Human Rights For Robots.

Intelligence includes the following (this list is not exhaustive). Certain things on this list may not technically define intelligence but arise as a result of intelligence.

Intuition
Reasoning and deduction, problem solving
Postulating and theorising
Using experience (patterns) from an unrelated field to solve a problem in a new, hitherto unseen field, without being shown or prompted
Understanding, comprehension
Defining and applying context to a situation
Discovery and exploration
Curiosity
The self-driven desire to seek new knowledge and experience
Using tools, inventing new tools
Exploring relationships with others
Having goals, aspirations
Conflict resolution
"Higher-level" thinking. Philosophy.
Ability to invent and work with abstract concepts.

There are many more, but this is a start.

I've deliberately tried to frame these in human terms, to narrow down on human intelligence.


So you are changing the discussion from intelligence to human intelligence? Why?

Anyway, machines already exhibit most of that list. Other terms are vague, like intuition or goals. My Amazon Fire has the goal of suggesting appropriate TV shows for me.
 
In fairness to FoxEye, once sentience and consciousness are mentioned in a discussion of AI, there are academics who would suggest that such "sentience" and "consciousness" is a far-future (if ever) goal, unlikely to occur in our lifetimes.

This position is often outlined using the Chinese Room thought experiment.
https://en.wikipedia.org/wiki/Chinese_room

I've personally always found the "consciousness" discussion a distraction from the utility of Artificial Intelligence as a concept. It is likely that the "Turing test" has been bested by a machine, and my view is in line with Turing's, who said:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

One thing the Chinese Room thought experiment does do is highlight that we have very little understanding of consciousness (human or otherwise). Whilst passing the Turing test may only be a simulation of it, why does that matter? If machines can do as we do, does it really matter if we can or cannot show that they are conscious (or that you are, for that matter)?
 
Here is the dictionary definition, for good measure.

It would seem that your Kindle Fire could hardly satisfy the dictionary definition of intelligence. It lacks understanding.


It is also worthwhile understanding what things like "military intelligence", signals intelligence, business intelligence, intelligence tests, etc. mean.

"Organisation intelligence" is a very different concept to "organism intelligence".

"Business intelligence", for example, defines a dataset of information that is used by or useful to the business or its customers or partners.

The business is not intelligent, rather its employees are (and their Kindles :p)
 
It would seem that your Kindle Fire could hardly satisfy the dictionary definition of intelligence. It lacks understanding.




"Organisation intelligence" is a very different concept to "organism intelligence".

"Business intelligence", for example, defines a dataset of information that is used by or useful to the business or its customers or partners.

The business is not intelligent, rather its employees are (and their Kindles :p)



I think you will have to define understanding. My Fire understands far more about TV shows than I do, for example.


You were talking about intelligence in layman's terms, as you seemed to question that. Businesses make intelligent decisions over data, beyond the intelligence of the individuals.
 
Let's try something else, for a second, to address what stewski just said.

I would anticipate that we are centuries away from being able to have an AI control a bipedal robot body, and for it to be indistinguishable from a human being (in all but appearance - so purely in behaviours and conversation).

That AI doesn't have to be housed in the robot body, it can be a giant distributed supercomputer if you like. I think we should mandate a robot body because of the thread title, and because human behaviour is more complex than just conversation.

So yeah - I would assert we are centuries away from even being able to convincingly "fake" human level intelligence, without sentience, et al.

There are other issues. Given how we seem to be approaching the limits of processor miniaturisation, there would also seem to be limits to the amount of intelligent robots you could have in operation. Energy budgets, data transmission speeds, etc, that would preclude them becoming as common as human beings. Or indeed common at all.
 
Let's try something else, for a second, to address what stewski just said.

I would anticipate that we are centuries away from being able to have an AI control a bipedal robot body, and for it to be indistinguishable from a human being (in all but appearance - so purely in behaviours and conversation).

That AI doesn't have to be housed in the robot body, it can be a giant distributed supercomputer if you like. I think we should mandate a robot body because of the thread title, and because human behaviour is more complex than just conversation.

So yeah - I would assert we are centuries away from even being able to convincingly "fake" human level intelligence, without sentience, et al.

There are other issues. Given how we seem to be approaching the limits of processor miniaturisation, there would also seem to be limits to the amount of intelligent robots you could have in operation. Energy budgets, data transmission speeds, etc, that would preclude them becoming as common as human beings. Or indeed common at all.



You've made something up from thin air. If you think we are centuries away then you will need to explain why, and account for the fact that the biggest limitation is mechanical and not cognitive. We aren't discussing mechanics.


Let's change domains to an area where mechanics is not a limitation, say driving a car.
 
You've made something up from thin air. If you think we are centuries away then you will need to explain why, and account for the fact that the biggest limitation is mechanical and not cognitive. We aren't discussing mechanics.

Far from it. Software can't yet fake human conversation for more than a few minutes. Fact.

e: I think a genuine replication of human behaviour is an asymptote that will never be reached. Think of it like Blade Runner. They will get more and more convincing over the years, but there will always be some facet of human intelligence that a machine just cannot convincingly mimic.

Today we are miles off. Convincing human mimicry is not a realistic goal for years.
 
Let's try something else, for a second, to address what stewski just said.

I would anticipate that we are centuries away from being able to have an AI control a bipedal robot body, and for it to be indistinguishable from a human being (in all but appearance - so purely in behaviours and conversation).

That AI doesn't have to be housed in the robot body, it can be a giant distributed supercomputer if you like. I think we should mandate a robot body because of the thread title, and because human behaviour is more complex than just conversation.

So yeah - I would assert we are centuries away from even being able to convincingly "fake" human level intelligence, without sentience, et al.

There are other issues. Given how we seem to be approaching the limits of processor miniaturisation, there would also seem to be limits to the amount of intelligent robots you could have in operation. Energy budgets, data transmission speeds, etc, that would preclude them becoming as common as human beings. Or indeed common at all.

For someone who has seemingly followed some discussion of AI, why are you ignoring the pace at which things appear to have changed in recent times?

I had a similar position to you during much of the '80s and '90s, right up to the '00s-ish.

However, larger data sets, standardisation on the linking of data, and simple networking and its large-scale use (by humans) have changed my opinion.

Beyond the Turing test being bested, there are achievements like IBM Watson winning Jeopardy! against the world's best players. Whilst I agree that a portable AI which simultaneously fakes the function of human physicality as well as interaction is a far higher bar than the Turing test, you should concede that it is already possible for a computer to "fake" intelligence as outlined by Turing. The rest is a matter of time IMHO; I'm not convinced of your timescales.
 
For someone who has seemingly followed some discussion of AI, why are you ignoring the pace at which things appear to have changed in recent times?

I had a similar position to you during much of the '80s and '90s, right up to the '00s-ish.

However, larger data sets, standardisation on the linking of data, and simple networking and its large-scale use (by humans) have changed my opinion.

Beyond the Turing test being bested, there are achievements like IBM Watson winning Jeopardy! against the world's best players. Whilst I agree that a portable AI which simultaneously fakes the function of human physicality as well as interaction is a far higher bar than the Turing test, you should concede that it is already possible for a computer to "fake" intelligence as outlined by Turing. The rest is a matter of time IMHO; I'm not convinced of your timescales.

The most recent attempts to fake human conversation that I saw were honestly terrible.

The Turing test, btw, is a very simple test and makes no claims about the ability of the system being tested to pass for human, even should it pass the Turing test.

All it requires is that 30% of the respondents not be able to discern the difference over a 5 min conversation. Human interaction tends to be more prolonged and more frequent.
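For illustration, here is a minimal sketch of that pass criterion as stated above. The 30% threshold and five-minute conversation come from the paragraph above; the judge verdicts and the function name are hypothetical, not taken from any real contest.

```python
# Minimal sketch of the pass criterion described above: the machine "passes"
# if at least 30% of judges, after a five-minute text conversation, fail to
# identify it as a machine. Verdicts below are hypothetical, for illustration only.

def passes_criterion(judge_fooled, threshold=0.30):
    """judge_fooled: list of booleans, True = judge believed the machine was human."""
    if not judge_fooled:
        return False
    return sum(judge_fooled) / len(judge_fooled) >= threshold

# Example: 10 judges, 3 of them fooled -> 30%, so the criterion is (just) met.
verdicts = [True, False, False, True, False, False, False, True, False, False]
print(passes_criterion(verdicts))  # True
```

Which also shows how low that bar is: fooling three judges out of ten, once, for five minutes, is enough.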
 
Far from it. Software can't yet fake human conversation for more than a few minutes. Fact.

e: I think a genuine replication of human behaviour is an asymptote that will never be reached. Think of it like Blade Runner. They will get more and more convincing over the years, but there will always be some facet of human intelligence that a machine just cannot convincingly mimic.

Today we are miles off. Convincing human mimicry is not a realistic goal for years.

I would agree with the usage of the word years, as in less than decades.

The progress in NLP, speech synthesis and recognition is developing exponentially fast.


But none of this is related to intelligence, which exists here and now.
 
The most recent attempts to fake human conversation that I saw were honestly terrible.

The Turing test, btw, is a very simple test and makes no claims about the ability of the system being tested to pass for human, even should it pass the Turing test.

All it requires is that 30% of the respondents not be able to discern the difference over a 5 min conversation. Human interaction tends to be more prolonged and more frequent.

The Turing test is a test, developed by Alan Turing in 1950, of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses.

https://en.wikipedia.org/wiki/Turing_test

Feel free to explain what you see as the failings of Turing's work in Computing/Maths/AI at length; I'm afraid your complaints so far are not what I would call fully formed.
 
I would agree with the usage of the word years, as in less than decades.

The progress in NLP, speech synthesis and recognition is developing exponentially fast.


But none of this is related to intelligence, which exists here and now.

Then I'd better start using the phrase "mammalian intelligence", if any old tablet and gaming console is considered "intelligent" by the academic community (which in itself saddens me; it means we've set the bar for intelligence too low).

Let's face it, a dormouse is far superior in the intelligence stakes to an Amazon Kindle. Beyond dispute.
 
https://en.wikipedia.org/wiki/Turing_test

Feel free to explain what you see as the failings of Turing's work in Computing/Maths/AI at length; I'm afraid your complaints so far are not what I would call fully formed.

Where have I done that?

I simply stated that the test that the recent "breakthroughs" have revolved around was not a good indicator of intelligence.

Many "chatbots" have claimed to pass the "Turing test", or variations thereof.

If you look at the transcripts of these "chats" you'll see how poor the imitation is.

Again, where am I mocking Turing's work? Many of these "Turing Tests" actually aren't his test, but a simplification/easier version of it anyhow.

e: https://www.theguardian.com/technol...-person-human-computer-robot-chat-turing-test

Allegedly 30% of judges couldn't tell the respondent was not a human. Which is nonsense. The AI responses were utterly unconvincing.

And that's what they call a "Turing beating" result. Shameful.
 
Where have I done that?

I simply stated that the test that the recent "breakthroughs" have revolved around was not a good indicator of intelligence.

Many "chatbots" have claimed to pass the "Turing test", or variations thereof.

If you look at the transcripts of these "chats" you'll see how poor the imitation is.

Again, where am I mocking Turing's work? Many of these "Turing Tests" actually aren't his test, but a simplification/easier version of it anyhow.

You said:
The Turing test, btw, is a very simple test and makes no claims about the ability of the system being tested to pass for human, even should it pass the Turing test.

All it requires is that 30% of the respondents not be able to discern the difference over a 5 min conversation. Human interaction tends to be more prolonged and more frequent.

The opening to the wikipedia article says:
The Turing test is a test, developed by Alan Turing in 1950, of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses.

Later you complain about the chat transcripts of the "supposed" winners.

If you have an academically valid argument against either the Turing test as a measure of machine "intelligence" or the quality/validity of the results outlined in the suggested 2015 beating of the Turing test I mentioned earlier, make it; so far you are just posting noise IMHO.
 
What I said was clear.

Look at the transcripts of the supposed "Turing beating" software.

You tell me if that's a good imitation of human conversation. I'm not interested here in Turing beyond the testing that bears his name. Regardless of whether they used his test in the way he intended or not, these programs claim to pass the Turing test.

But the actual transcripts of the conversations are really, really obviously not human.

I'm not going to discuss with you Turing or his life's work. The only thing up for discussion here is whether software can imitate human conversation satisfactorily.

And if not, how much more work there is to do. Because based on the transcripts, we're not even 5% there yet.

e: Also I must point out something I find ironic. You said "Do not discount the leaps and bounds we have made in the last 2 years", then you say that a test devised in the 1950s is the final arbiter of whether a machine is intelligent or not.

Does that not strike you as a contradiction?
 
What I said was clear.

Look at the transcripts of the supposed "Turing beating" software.

You tell me if that's a good imitation of human conversation. I'm not interested here in Turing beyond the testing that bears his name. Regardless of whether they used his test in the way he intended or not, these programs claim to pass the Turing test.

But the actual transcripts of the conversations are really, really obviously not human.

I'm not going to discuss with you Turing or his life's work. The only thing up for discussion here is whether software can imitate human conversation satisfactorily.

And if not, how much more work there is to do. Because based on the transcripts, we're not even 5% there yet.

Are you suggesting you've spotted a fatal flaw in the 2015 tests that academia presumably overlooked?

Yes, you have a link to some chat logs; no, I do not see that as the complete picture of the test as it was conducted.

You also implied that the Turing test has nothing to do with establishing a computer's ability to pass for human.

The Turing test, btw, is a very simple test and makes no claims about the ability of the system being tested to pass for human, even should it pass the Turing test.

You have continuously shifted the goalposts and now attack a mainstream academic result on the basis of what?
 