Human Rights For Robots.

What an absolutely ridiculous idea and assumption. Deranged thoughts like this are what are destroying humanity.

Rape declines because rapists can get ****ed off by a machine? Seriously? All this will do is create a generation of people who have their natural needs taken care of by a robot, as if we don't have enough ****ers already. Rapists will continue raping ffs. One of the deficiencies in a rapist is a lack of empathy, a lack of learning/being taught to respect PEOPLE and their wishes. How on earth does replacing the person with a robot fix this???

In fact your idea will create more rape because there will be swathes of people who will not know real people/women. This is just separating men and women, with a wedge made of AI.

Rape is typically about control not sex or empathy.
 
Wrong, and then your conjecture: self-reprogramming software already exists. So no, once initially programmed it can self-learn and change itself. Where do we see that sort of thing, but millions of times more complex? Oh wait, the brain. Which is exactly what I've been saying.
And no, they can't do that yet because they are still extremely tiny and simple. I mean, even the current synapse chip is around 16 million neurons, rather than the roughly 86bn neurons in a human. On top of that is the connection complexity: around 250 connections each, compared to the roughly 10,000 in humans. But it's developing quickly; we were at around 1,000-neuron chips in ~2010.
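As a rough back-of-the-envelope check on those figures (a sketch only, using the numbers quoted above rather than verified chip specs), the gap in total connection count works out to a few hundred thousand times:

chip_neurons = 16_000_000            # "16 million neurons" on the current synapse chip
chip_connections = 250               # connections per neuron, as quoted above
brain_neurons = 86_000_000_000       # ~86bn neurons in a human brain
brain_connections = 10_000           # ~10,000 connections per neuron

chip_synapses = chip_neurons * chip_connections      # 4.0e9
brain_synapses = brain_neurons * brain_connections   # 8.6e14

print(f"chip:  {chip_synapses:.1e} connections")
print(f"brain: {brain_synapses:.1e} connections")
print(f"gap:   {brain_synapses / chip_synapses:,.0f}x")   # roughly 215,000x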

So you are making assumptions based on current technology, with nothing on why it isn't possible.
But at least you now agree we do come pre-programmed.

I don't share your enthusiasm, mainly because the mind is not even close to being understood. There are hints of quantum effects at the molecular level of cells, so we won't be able to replicate that with our binary computers, no matter how much we improve them. Besides, the progress of technology will reach a point where we will have humans fully linked to machines, which will make autonomous sentient machines redundant. Unless of course you assume there exists a higher form of intelligence only machines can reach but, in the absence of evidence, that concept is a sibling of the Giant Spaghetti Monster.
 
....
Unless of course you assume there exists a higher form of intelligence only machines can reach but, in the absence of evidence, that concept is a sibling of the Giant Spaghetti Monster.

Yeah I was going to say belief in the "singularity" is actually "faith".

"I believe that the singularity will happen and machines will become sentient" - for there is actually no evidence to support such a prediction.

And your last point is worth re-stating.

As you said, belief that a sentient AI would be a superior intelligence to humans is even less based in any kind of actual evidence. Thus far, we know that machines can evaluate human-defined problems faster than a human can.

But we do not know that machines would be faster than a biological brain at any kind of human-level cognition.

Think about it in terms of emulation. A very fast PC cannot emulate a PS2/PS3 faster than an actual PS3 can execute its own native code.

A very, very, very fast super computer may not be capable of emulating a human brain (or equivalent) faster than a real human brain can evaluate its native sensory inputs and mental processes.

And that may come down to the laws of physics. It may be that our human brain is as efficient as can be. Depending on your beliefs, either through millions of years of refining evolution, or intelligent design.
 
No I don't think we are pre-programmed. The brain develops differently according to the environment you grow up in. Additionally, the brain is "elastic", and can be trained.

You could say that environment contributes to our "programming", but we are not "pre-programmed" by any means.

That's just as bad as believing everything is "fate".

It's pretty sad that some people can't see the difference between a human being and a piece of software, Glaucus.

You are describing machine learning 101.
 
I don't share your enthusiasm, mainly because the mind is not even close to being understood. There are hints of quantum effects at the molecular level of cells, so we won't be able to replicate that with our binary computers, no matter how much we improve them. Besides, the progress of technology will reach a point where we will have humans fully linked to machines, which will make autonomous sentient machines redundant. Unless of course you assume there exists a higher form of intelligence only machines can reach but, in the absence of evidence, that concept is a sibling of the Giant Spaghetti Monster.

You don't need to understand how the mind works to create intelligence.

There is no evidence of any quantum effects in the brain; that is just pseudoscience. Even if there were, progress in quantum computers has been very strong recently. There are several companies selling bona fide quantum computers. At the moment they are just slow, so they are used for research.
 
Yeah I was going to say belief in the "singularity" is actually "faith".

"I believe that the singularity will happen and machines will become sentient" - for there is actually no evidence to support such a prediction.

And your last point is worth re-stating.

As you said, belief that a sentient AI would be a superior intelligence to humans is even less based in any kind of actual evidence. Thus far, we know that machines can evaluate human-defined problems faster than a human can.

But we do not know that machines would be faster than a biological brain at any kind of human-level cognition.

Think about it in terms of emulation. A very fast PC cannot emulate a PS2/PS3 faster than an actual PS3 can execute its own native code.

A very, very, very fast super computer may not be capable of emulating a human brain (or equivalent) faster than a real human brain can evaluate its native sensory inputs and mental processes.

And that may come down to the laws of physics. It may be that our human brain is as efficient as can be. Depending on your beliefs, either through millions of years of refining evolution, or intelligent design.



Why would you want to emulate a human when you could develop something far faster and superior? Why bother emulating a PS2 on a PC when you could just have a PS4?
 
No. We don't come pre-programmed.

FoxEye is correct, he is talking about neuroplasticity.

We are only partially "pre-programmed" (things like organs), but human brains are very large and highly evolved. Neural pathways are CONSTANTLY changing. You can have one thought and it can physically change your brain's neural pathways.

In the same way any ANN works, fancy that.
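For anyone who hasn't seen the analogy spelled out, here is a minimal sketch in Python (a toy single neuron with made-up inputs, not anyone's actual system): every example it is shown nudges its weights, loosely the way experience is said to rewire neural pathways.

import random

# Toy artificial neuron (a perceptron): its weights change with every mistake
# it makes, loosely analogous to pathways strengthening or weakening in a brain.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def learn(inputs, target):
    # One "experience": nudge the connection strengths towards the target.
    global bias
    error = target - predict(inputs)
    for i, x in enumerate(inputs):
        weights[i] += learning_rate * error * x
    bias += learning_rate * error

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
before = list(weights)
for inputs, target in examples:
    learn(inputs, target)
print("weights before:", before)
print("weights after: ", weights)   # no longer what they started as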
 
Genetics. We are born with machinery that allows us to adapt and survive. It is not pre-programmed, though, because DNA cannot carry that information. Who we are is what we learn as we go through life, using the machinery of genetics. A newborn baby cannot do mathematics, but as it learns life skills it acquires that knowledge. Not pre-programmed but learned behaviour. Big difference.

The ability and methods of learning are pre-programmed.
 
There are things which are common to all of us, such as the instinctive reaction to pulling your hand away from a naked flame.

The line between genetic "programming" and "learned behaviour", however, is not necessarily easy to spot.

There are some behaviours that are common to all of us born and raised in this country (ie, the West), that might not be found in a feral child raised by monkeys in the jungle (extreme example).

Given that the brain is experiencing stimuli in the womb, and therefore learning, and the brain is constantly being altered (physically) by stimuli, it might be very easy to assume that babies are born with "pre-programmed instincts"... when these might actually be learned behaviours that just happen to be common to all of us, as we all spent 9 months (give or take) in the womb.

But let's get back to AI, here. You said that we're "pre-programmed" to breathe, and to react to pain, etc. And we'd all agree, I hope, that those are very basic (tho vital) responses/behaviours.

Our more complex, sophisticated behaviours are learned. You do not acquire the ability to drive through any instinct. Or to play guitar. Or to solve equations. All this is learned, studied, and perfected.

A piece of software can be programmed to perform a task that its human designer knew how to define in precise terms.

A computer can never learn to do anything if there is no human around to precisely define the problem and the method of its solution.

A computer cannot gaze at the stars, observe a falling apple, and make theories about the world it exists in.

This conversation to me only highlights the gulf between man and machine.

You clearly have no idea about software.
 
You don't need to understand how the mind works to create intelligence.

There is no evidence of any quantum effects in the brain; that is just pseudoscience. Even if there were, progress in quantum computers has been very strong recently. There are several companies selling bona fide quantum computers. At the moment they are just slow, so they are used for research.

There's plenty of literature on the subject but the picture is not clear, which is why I used the word "hints". If quantum effects provide advantages, forms of life would certainly use them.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569494/

http://www.sciencedirect.com/science/article/pii/S0003491615003243

https://www.cambridge.org/core/jour...-proteinsdiv/F2EEEB55393DEED1E771328AD96F36EB


As for creating intelligence without understanding... intelligence, good luck with that - it's as likely as a monkey playing Beethoven's 5th.
 
You clearly have no idea about software.

Oh great, another Glaucus.

Tell me, why should any of us bother to reply to your one-liners? You can't further a discussion with one-line put-downs.

In the world of academia, no one behaves like this and expects to be taken seriously.

A piece of software can be programmed to perform a task that its human designer knew how to define in precise terms.

I've never seen a piece of software that solved a problem its human creator did not give it the capability and the method to solve. Have you? Have you seen software built for facial recognition that instead chooses to operate the lights?

A computer cannot gaze at the stars, observe a falling apple, and make theories about the world it exists in.

Have you seen software which shows any "understanding", or that shows signs of developing "understanding", DP? Have you seen software that is "curious" about its environment? Have you seen software attempt to learn from data that it wasn't supposed to be working with?

When you write software, when do you not control the dataset that the software works on? When is that dataset "the universe", unbounded by constraints? When is the software given the dataset with no instructions on what to do with it?

You say I have no understanding of software, and who would know better than you, eh? Actually I've programmed in everything from Pascal to Prolog, assembler, Javascript, C, SQL, Haskell, Python and others*. *(Not commercially, and I make no claims to be any good at it, compared to a professional developer. Nor do I claim to know much about best practice. But I can make a computer say "Hello world" just fine :p)
 
Why would you want to emulate a human when you could develop something far faster and superior? Why bother emulating a PS2 on a PC when you could just have a PS4?

In the same way any ANN works, fancy that.

What is an artificial neural network if not an emulated physical network?

The brain is a physical neural net. Billions of neurons firing in parallel. Billions upon billions of connections and pathways, working independently but coherently - and crucially, simultaneously.

Your software neural net can have thousands of artificial "neural pathways", but they have to be processed in some kind of order by the machine that's emulating them for the data to make any sense. At worst, in serial. Even if that's a supercomputer, it's still nowhere near capable of the feats of parallel processing that the brain does every microsecond.

Is it not emulation?
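To make the emulation point concrete, here is a minimal sketch (illustrative only; real frameworks batch this work across GPUs, but the scheduling point stands): the "neurons" of a software layer are evaluated one after another by a loop, whereas the biological network being mimicked fires all of its neurons at once.

import math
import random

# One layer of a software neural net, evaluated neuron by neuron, in order.
def forward(inputs, layer_weights, biases):
    outputs = []
    for neuron_weights, b in zip(layer_weights, biases):   # serial loop over "neurons"
        activation = sum(w * x for w, x in zip(neuron_weights, inputs)) + b
        outputs.append(1 / (1 + math.exp(-activation)))    # sigmoid squashing
    return outputs

inputs = [0.2, 0.7, 0.1]
layer_weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(4)]  # 4 neurons
biases = [0.0] * 4
print(forward(inputs, layer_weights, biases))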
 
Oh great, another Glaucus.

Tell me, why should any of us bother to reply to your one-liners? You can't further a discussion with one-line put-downs.

In the world of academia, no one behaves like this and expects to be taken seriously.



I've never seen a piece of software that solved a problem its human creator did not give it the capability and the method to solve. Have you? Have you seen software built for facial recognition that instead chooses to operate the lights?






IIRC dp is employed developing AI systems.


Anyway, on the second point, there have been experiments where they had little robots on a table with panels that were powered to provide "food". They could use lights to signal each other and could "breed".

After loads of generations, some started to show deviant behaviour: instead of signalling food to the others, they would go to switched-off panels and signal to the others that there was food there.
Then go and nick the food.


The creators were aiming for cooperation, not competition.
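For a flavour of that kind of set-up, here is a toy sketch (purely illustrative; the real experiment used physical robots and was far more sophisticated): each agent carries a single "honesty" gene for whether it signals food it finds, fitter agents get copied with mutation, and honesty gets selected away even though nothing in the code rewards deception by name.

import random

# Toy evolutionary run: an agent's gene is its probability of signalling food
# honestly. Honest finders end up sharing; quiet finders eat alone.
POP, GENERATIONS, FOOD = 50, 200, 10.0
population = [random.random() for _ in range(POP)]   # honesty genes in [0, 1]

for _ in range(GENERATIONS):
    fitness = []
    for honesty in population:
        if random.random() < honesty:
            fitness.append(FOOD / 2 + random.uniform(0, 1))   # signalled, so shared
        else:
            fitness.append(FOOD + random.uniform(0, 1))       # kept quiet, ate alone
    # "Breeding": fitter agents are more likely to be copied, with a small mutation.
    population = [
        min(1.0, max(0.0, random.choices(population, weights=fitness)[0] + random.gauss(0, 0.05)))
        for _ in range(POP)
    ]

print(f"average honesty after {GENERATIONS} generations: {sum(population) / POP:.2f}")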
 
What is an artificial neural network if not an emulated physical network?

The brain is a physical neural net. Billions of neurons firing in parallel. Billions upon billions of connections and pathways, working independently but coherently - and crucially, simultaneously.

Your software neural net can have thousands of artificial "neural pathways", but they have to be processed in some kind of order by the machine that's emulating them for the data to make any sense. At worst, in serial. Even if that's a supercomputer, it's still nowhere near capable of the feats of parallel processing that the brain does every microsecond.

Is it not emulation?

A human brain has a hideous amount of overhead, though, that a machine brain wouldn't need.

A machine brain could have hugely fewer neurons but outperform a human brain simply because it doesn't need to manage a complicated body.
 
I don't share your enthusiasm, mainly because the mind is not even close to being understood.

I don't share their enthusiasm either, but I think your reply might well be mistaken. A full understanding of how something works isn't necessary for making it. It's possible to make something with no understanding at all of how it works. For example, humans >15,000 years ago had no understanding of how genes work or even that genes exist but they were still able to genetically engineer animals through selective breeding in order to create dogs and different breeds of dogs. Stone age humans had no understanding of how smelting works or how alloying works but they were able to reliably do both. Bronze age (and possibly stone age) humans had no understanding of bacteria or even any knowledge that bacteria existed or even that any such thing could exist, but they still made and used treatments using penicillin. Etc.

So I think it's possible that humans might at some point make a mind without anything close to an understanding of how a mind works. Or even what a mind really is.
 
IIRC dp is employed developing AI systems.


Anyway, on the second point, there have been experiments where they had little robots on a table with panels that were powered to provide "food". They could use lights to signal each other and could "breed".

After loads of generations, some started to show deviant behaviour: instead of signalling food to the others, they would go to switched-off panels and signal to the others that there was food there.
Then go and nick the food.

The creators were aiming for cooperation, not competition.

Highly complex algorithms giving rise to evolving behaviours are still not sentient, nor do they understand what they're doing.

I'm not calling into question the cleverness of people working on systems that "learn" or "evolve", etc, etc.

I'm just saying that their increasingly sophisticated software is not, and never will be, intelligent, sentient, or self-aware.
 
Highly complex algorithms giving rise to evolving behaviours are still not sentient, nor do they understand what they're doing.

I'm not calling into question the cleverness of people working on systems that "learn" or "evolve", etc, etc.

I'm just saying that their increasingly sophisticated software is not, and never will be, intelligent, sentient, or self-aware.

You didn't say anything about sentience, you said:

I've never seen a piece of software that solved a problem its human creator did not give it the capability and the method to solve

Which is exactly what they did.


But your last point about them never being intelligent or self-aware only makes sense if you believe in god or some other divine being making man sentient.

After all, if random chance can make sentience, intelligent design should be able to.
 