
Robot Rights

Last posted Dec 18, 2012 at 02:50PM EST. Added Dec 15, 2012 at 01:47PM EST
50 posts from 20 users

I just had a heated debate with my sister about the topic of this thread, and asked her the following question: if there were a world exactly like ours running on a computer, with perfectly simulated human beings, would it be wrong to delete them? Her answer was no.
.
So what constitutes genocide? According to her, it is not murder unless the victim has a corporeal body made of flesh and blood. But what do you think? If it were absolutely impossible to tell the difference between our inventions and our offspring, at least on a mental level, would they deserve the same rights?
.
In fifty years, are you going to be one of those right-wing nutjobs holding a sign that says “Darwin hates androids” and picketing robot funerals? The next generation of bigots could be you!

Dec 15, 2012 at 01:47PM EST

For me, the root of the problem is whether they qualify as Life, which, according to the current scientific standard, requires the following: Homeostasis, Organization, Metabolism, Growth, Adaptation, Response to stimuli, and Reproduction.

The second problem is sentience. Are they totally sentient, partially sentient, or sentient at all?

Personally, if they have full sentience and at least five of the Processes of Life, I think they sit as our equals. As such, I believe that would entitle them to basic human and civil rights.

Dec 15, 2012 at 02:02PM EST

It is difficult to define robotic life by the way we define biological life. I cannot see metabolism, reproduction, and growth ever being necessary components of robotics, unless they “improve” themselves striving to become “more human,” like certain androids in fiction do.
.
As for sentience, the short form of it is simply to be aware of one’s environment and have the environment effect sensations which change one’s behavior. By that definition, a burglar alarm is sentient.
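To make that concrete, here is a toy sketch (entirely my own, not anyone’s real alarm firmware) of a “sentient” burglar alarm under that weak definition:

```python
# A toy "sentient" burglar alarm: it is aware of its environment (a door
# sensor), and the environment produces sensations that change its behavior.
# Under the weak definition of sentience above, this qualifies.

def burglar_alarm(door_is_open: bool) -> str:
    if door_is_open:       # aware of its surroundings
        return "ALARM!"    # the environment changed its behavior
    return "quiet"

print(burglar_alarm(False))  # quiet
print(burglar_alarm(True))   # ALARM!
```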
.
For me, an equal with humans so far as rights go would have to fulfill the following requirements:
1. Communication – can they tell us what they need or want?
2. Sentience – are they aware of the things around them?
3. Intelligence – can they solve problems or be rational?
4. Autonomy – do they have free will?
.
If you can answer ‘yes’ to all four, then you have an individual deserving human rights in my book.

Dec 15, 2012 at 02:21PM EST

Syndic wrote:

It is difficult to define robotic life by the way we define biological life. I cannot see metabolism, reproduction, and growth ever being necessary components of robotics, unless they “improve” themselves striving to become “more human,” like certain androids in fiction do.
.
As for sentience, the short form of it is simply to be aware of one’s environment and have the environment effect sensations which change one’s behavior. By that definition, a burglar alarm is sentient.
.
For me, an equal with humans so far as rights go would have to fulfill the following requirements:
1. Communication – can they tell us what they need or want?
2. Sentience – are they aware of the things around them?
3. Intelligence – can they solve problems or be rational?
4. Autonomy – do they have free will?
.
If you can answer ‘yes’ to all four, then you have an individual deserving human rights in my book.

Metabolism, Reproduction, and Growth may be more loosely defined in this situation.

Metabolism may just refer to the use and management of energy or basic materials, even if that just amounts to use of electricity.

Reproduction could simply be the construction of a new robot. I don’t know of any rule that requires reproduction to happen by one specific means.

Growth is the most unlikely, I agree, but the ability to take in “upgrades” or new “parts” may fill the gap between metabolism and adaptation.

As for sentience, I’ll retract my prior statement and replace it with Complex Sentience: are they capable of recognizing and learning, as well as predicting a situation?

Last edited Dec 15, 2012 at 02:36PM EST
Dec 15, 2012 at 02:34PM EST

Cale wrote:

For me, the root of the problem is whether they qualify as Life, which, according to the current scientific standard, requires the following: Homeostasis, Organization, Metabolism, Growth, Adaptation, Response to stimuli, and Reproduction.

The second problem is sentience. Are they totally sentient, partially sentient, or sentient at all?

Personally, if they have full sentience and at least five of the Processes of Life, I think they sit as our equals. As such, I believe that would entitle them to basic human and civil rights.

Well that’s a bit messed up. So if they’re just a bit short in the Homeostasis department then they can be exploited as slaves? That’s a pretty thin line to be drawing here.

I don’t have any definitive answers on the subject (And I don’t think anyone does), but it’s an interesting question, and may be an issue in generations to come. Assuming we haven’t all slaughtered each other by then, of course.

Dec 15, 2012 at 03:33PM EST

This is similar to when people talk about killing characters in computer games and relate that to how you might feel in real life.

Genocide is wrong, but robots are reconstructible; they don’t have feelings, and they aren’t living and thinking (for themselves).

Whoops, forgot to inb4 Mex.

Last edited Dec 15, 2012 at 04:52PM EST
Dec 15, 2012 at 04:25PM EST

I say give them equal rights. It’ll make it less likely they revolt against us in the future.

Dec 15, 2012 at 04:26PM EST

I don’t even know what the fuck I should say about this. No, you shouldn’t delete them.

Dec 15, 2012 at 05:45PM EST

量子 Meme wrote:

This is similar to when people talk about killing characters in computer games and relate that to how you might feel in real life.

Genocide is wrong, but robots are reconstructible; they don’t have feelings, and they aren’t living and thinking (for themselves).

Whoops, forgot to inb4 Mex.

Humans are reconstructible (well, reproducible), the robots we’re referring to have feelings and can think for themselves, and life is loosely definable.

Dec 15, 2012 at 06:00PM EST

This debate hinges on whether or not we are computer-simulated beings ourselves.

If that theory is false, which I assert it is with a 0.01% margin of error, then the argument is moot, and we can continue regarding simulated structures as nothing more than artificial machinery that carries no ethical burden. Delete away.

If it’s true, then we definitely cannot delete them, as we cannot value the simulation any less than ourselves.

I’d explain my reasoning better but this is way too freaking meta. My mind is already getting ready to explode just from thinking about it. I’m afraid I can provide no further insight, lest I splatter my brains on the walls.

Dec 15, 2012 at 06:05PM EST

@QuantumMeme: No, video game characters are more like the prokaryotes of the robotics world. I am talking about hypothetical machines that are actually capable of the complex human thoughts and emotions that we experience. If technology continues along the path it has been on, we may very well see such a machine as early as 2020.

Dec 15, 2012 at 06:06PM EST

Blue Screen (of Death) wrote:

This debate hinges on whether or not we are computer-simulated beings ourselves.

If that theory is false, which I assert it is with a 0.01% margin of error, then the argument is moot, and we can continue regarding simulated structures as nothing more than artificial machinery that carries no ethical burden. Delete away.

If it’s true, then we definitely cannot delete them, as we cannot value the simulation any less than ourselves.

I’d explain my reasoning better but this is way too freaking meta. My mind is already getting ready to explode just from thinking about it. I’m afraid I can provide no further insight, lest I splatter my brains on the walls.

>implying life has to be human to be valuable

Dec 15, 2012 at 06:58PM EST

Pseudogenesis wrote:

Well that’s a bit messed up. So if they’re just a bit short in the Homeostasis department then they can be exploited as slaves? That’s a pretty thin line to be drawing here.

I don’t have any definitive answers on the subject (And I don’t think anyone does), but it’s an interesting question, and may be an issue in generations to come. Assuming we haven’t all slaughtered each other by then, of course.

It doesn’t matter which of the Processes they lack; if they have fewer than five, they simply aren’t living creatures, and I don’t see any reason to treat them as such.

Dec 15, 2012 at 07:06PM EST

@Pseudo

Wut? I said that a mechanical/cybernetic machine that only virtually represents our universe will remain a mechanical/cybernetic virtual construct, no matter how accurately it reflects our current universe, until it is proven that such a virtual construct can be compared to our own as an actual, tangible universe.

In other words: if it is proven that our own universe is in itself a virtual construct, then another virtual construct of a universe within it must be valued equally. If not, then we don’t need to assume a virtual universe we create has the same level of importance. (Not that you would want to dispose of it so easily anyway, since creating a virtual universe would be a fascinating technological breakthrough in software, and you would want to watch it develop.)

This is not the same as suggesting that we don’t have to care about anything that isn’t strictly human. I care about wildlife too, you know. But it is suggesting that we don’t have to care about virtual, non-biological machines.

Dec 15, 2012 at 07:20PM EST

We, ourselves, cannot define what is living and what is not. Just because we are the dominant species at the current time does not mean we know everything about life. We aren’t one of them. We don’t know what it’s like. That’s why it’s so convenient to just dismiss the idea.

I feel as though they do deserve rights. If they have free will, they’re alive. That’s all there is to it. So no, I don’t believe it is ethical to just delete them.

Dec 15, 2012 at 08:36PM EST

@BSoD Ok, thanks for clarifying.

@Kalmo

If they have free will, they’re alive.


That’s a pretty good judge of sentience. The problem, however, is deciding if anything has free will. :p

Last edited Dec 15, 2012 at 09:16PM EST
Dec 15, 2012 at 09:15PM EST

Pseudogenesis wrote:

@BSoD Ok, thanks for clarifying.

@Kalmo

If they have free will, they’re alive.


That’s a pretty good judge of sentience. The problem, however, is deciding if anything has free will. :p

Well, if we’re going by a non-deist, non-determinist standpoint, then it’d be yes, we all do.

Dec 15, 2012 at 10:00PM EST

Blue Screen (of Death) wrote:

@Kalmo

Do software programs have free will?

Hmm… Well, going by a theological standpoint, you could say we’re software.

God designed us and gave us certain functions to abide by. Yet we have the capability to do our own thing.

So sure, maybe. Although it is highly debatable, since it would only perform a function under certain prerequisites. It would have to be randomized to be able to solve problems in more than one way and decide its own path. But at the same time, it would be difficult to give a software program sentience.

But I say anything is possible. So sure, maybe one day they will!

Dec 15, 2012 at 10:29PM EST

@Kalmo: Lots of computer programs have free will. Some video games, I hate to say, are capable of devising lots of complicated and unique strategies to defeat a player. However, video games are not self aware, nor are they intelligent, nor can they communicate with us in a culturally meaningful way.

@MDF:

Dec 15, 2012 at 11:22PM EST

Pseudogenesis wrote:

@BSoD Ok, thanks for clarifying.

@Kalmo

If they have free will, they’re alive.


That’s a pretty good judge of sentience. The problem, however, is deciding if anything has free will. :p

This is a key point. It is easy to say that robots lack free will because we built them and we can explain their behaviours as a set of pre-programmed logic routines. A robot may be programmed to have the appearance of free will, but somebody could point to a source code listing and say that the robot’s actions are because of X, Y, and Z.

The human thought process can be reduced to chemical reactions. Free will is probably an illusion caused by the fact that the human mind is so complex that we do not understand it yet. We have made great strides in understanding human thought processes over the past few centuries. Within the next 100 years, we may have the human mind completely mapped out. Any human behaviour could then be explained by saying that the person’s reactions are because of X, Y, and Z. This is already happening, with varying degrees of accuracy, in psychology and physiology.

At what point do we say that an artificial intelligence becomes sentient? When it can argue on its own behalf? Somebody could probably hack Eliza to spit out “I have free will” every so often, but that would not make it so. Or when it passes the Turing test? We will probably have such an AI within fifteen years, and there are probably people who could fail that test.
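To illustrate why canned output proves nothing, here is a toy sketch (my own invention, not actual Eliza code):

```python
import random

# A toy Eliza-style bot "hacked" to occasionally claim free will.
# The claim is just a string in a list; it demonstrates nothing about
# the program actually having free will.
CANNED_REPLIES = [
    "Tell me more about that.",
    "How does that make you feel?",
    "I have free will.",  # the hack: an unearned assertion
]

def reply(user_input: str) -> str:
    # The input is ignored entirely; the reply is a random canned line.
    return random.choice(CANNED_REPLIES)

print(reply("Do you have free will?"))
```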

When we do draw a line to say that robots on one side have rights and those on the other do not, there may be unexpected consequences. Consider animals, which we kill for food or to reduce overpopulation. Consider children. They may fall on a different side of the line than we might expect.

Dec 15, 2012 at 11:30PM EST

Kalmo wrote:

Hmm… Well, going by a theological standpoint, you could say we’re software.

God designed us and gave us certain functions to abide by. Yet we have the capability to do our own thing.

So sure, maybe. Although it is highly debatable, since it would only perform a function under certain prerequisites. It would have to be randomized to be able to solve problems in more than one way and decide its own path. But at the same time, it would be difficult to give a software program sentience.

But I say anything is possible. So sure, maybe one day they will!

I’ve always explained the problem of free will by describing time as an equation. You have all the elements, so say, “X [shite tons of added and multiplied constants] = Y”, where X is the input and Y is the outcome. That’s the way humans work to an extent, and if you look at the universe in the same way it’s an equation as well. You could talk about choice and sentience all you want, but if you rewound life to a few minutes back, wouldn’t you do the same thing? The choice is all up to you, but the reason you made that choice is rooted in the present, which is a function of the past. And ultimately the past was decided by the past. It’s an infinite regression.
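A toy sketch of what I mean (the constants are arbitrary, nothing physical): the same starting state always produces the same outcome, no matter how many times you rewind and rerun it.

```python
# A toy deterministic "universe": the next state is a fixed function of the
# current state. Rewind to the same starting state, rerun, and you always
# get the same future; "choice" never changes the outcome.

def step(state: int) -> int:
    return (1103515245 * state + 12345) % 2**31  # fixed rule, arbitrary constants

def run(initial_state: int, steps: int) -> int:
    state = initial_state
    for _ in range(steps):
        state = step(state)
    return state

print(run(42, 10))  # rerunning from the same state...
print(run(42, 10))  # ...always prints the same number
```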

I love thinking about this stuff. Lots of math analogies as you can tell, even though I never was good at it. :S

Dec 15, 2012 at 11:35PM EST

Syndic wrote:

@QuantumMeme: No, video game characters are more like the prokaryotes of the robotics world. I am talking about hypothetical machines that are actually capable of the complex human thoughts and emotions that we experience. If technology continues along the path it has been on, we may very well see such a machine as early as 2020.

Ah, alright, then it’s exactly the same as genocide of humans, and therefore wrong…

Dec 16, 2012 at 06:10AM EST

Have we ever fought for the rights of viruses?

No, because A: they aren’t considered living and B: they’re dangerous to human life.

However, they do have RNA/DNA, a protein coat and such. So even though they aren’t technically living, they do share some characteristics.

We humans will fight for the rights of anything, though, won’t we?

Last edited Dec 16, 2012 at 11:37AM EST
Dec 16, 2012 at 11:34AM EST

Dr. Coolface wrote:

What if God is a robot?

Funny you should mention that. Isaac Asimov wrote a story called The Last Answer that ends on a similar note. It’s a very good read; you all should give it a look.

Last edited Dec 16, 2012 at 12:42PM EST
Dec 16, 2012 at 12:41PM EST

This is not a black and white question. Many things have to be considered. Do they have sentience and free will, or are they just programs? Are they as intelligent as a human or as another animal? Do they have emotions? Can they feel pain? Essentially, are they human?

If they are not, who cares? It’s just a computer program. I’m sure you’ve killed enemies in Call of Duty before. Did you feel remorse?

On the other hand, if they are human-like it’s a bit more iffy. They still technically aren’t “real” so to speak, but they are sapient. This doesn’t really have a straightforward answer.

Dec 16, 2012 at 12:53PM EST

Of course robots could have human feelings.

Need I remind you all of when you nearly broke down into tears when a certain animated robot almost didn’t make it back from his adventure aboard the Axiom?

Dec 16, 2012 at 02:24PM EST

CLYDE (Joe's Nightmare) wrote:

Have we ever fought for the rights of viruses?

No, because A: they aren’t considered living and B: they’re dangerous to human life.

However, they do have RNA/DNA, a protein coat and such. So even though they aren’t technically living, they do share some characteristics.

We humans will fight for the rights of anything, though, won’t we?

That’s because they aren’t complex enough to have evolved emotion or pain. We’re talking beings that have emotion, pain, and sentience. All they lack to be humans is a biological system of operation. That’s too thin of a line to build a standard of morality on.

Last edited Dec 16, 2012 at 03:57PM EST
Dec 16, 2012 at 03:54PM EST

If you don’t want robots to revolt, don’t make one that can think for itself. Of course, that’s assuming that robots can have emotions. I’m not sure if they can, and it would be pointless to make one with emotions. No emotionless robot will rebel.

But in the end it seems like robots can only run programs. Robots that only run programs aren’t sentient, and have no rights. I doubt something like Short Circuit or Terminator can happen because computers don’t have the same kind of capacities we do.

Dec 16, 2012 at 09:39PM EST

Pseudogenesis wrote:

That’s because they aren’t complex enough to have evolved emotion or pain. We’re talking beings that have emotion, pain, and sentience. All they lack to be humans is a biological system of operation. That’s too thin of a line to build a standard of morality on.

… Robots have pain?

Dec 17, 2012 at 07:13AM EST

Emotions are a chemical response to stimuli. Knowing that your body has been damaged, and that the damage is a bad thing, is what we call pain. It is possible for machines to have feelings, provided that self-diagnostics are programmed in.


That being said, I once read that if you show a machine a table, then rotate it slightly, it recognizes the table as a completely new obstacle. We haven’t programmed basic pattern recognition into these things yet, just what we need them to do. I’d say that while it’s possible for machines to achieve sentience, doing so is so inherently difficult an undertaking that nobody in this thread will be alive to see it.

Dec 17, 2012 at 08:31AM EST

MDFification wrote:

Emotions are a chemical response to stimuli. Knowing that your body has been damaged, and that the damage is a bad thing, is what we call pain. It is possible for machines to have feelings, provided that self-diagnostics are programmed in.


That being said, I once read that if you show a machine a table, then rotate it slightly, it recognizes the table as a completely new obstacle. We haven’t programmed basic pattern recognition into these things yet, just what we need them to do. I’d say that while it’s possible for machines to achieve sentience, doing so is so inherently difficult an undertaking that nobody in this thread will be alive to see it.

Emotions and full consciousness are a bit broader than just chemicals. They’re too complicated for us to completely understand. Theoretically one could program robots with artificial emotions, but the robots will still just be running programs.

Dec 17, 2012 at 09:33AM EST

Katie C. wrote:

Emotions and full consciousness are a bit broader than just chemicals. They’re too complicated for us to completely understand. Theoretically one could program robots with artificial emotions, but the robots will still just be running programs.

With full consciousness, you have a point: we don’t understand that. Emotions are a different matter. Emotions are a chemical response from our brain, and why we have them is pretty well understood: they provide incentives for behaviors like avoiding damage to your body, keeping groups of people together, etc.
Our unconscious minds are where emotions come from the majority of the time, so the person experiencing an emotion doesn’t always know exactly what it’s meant to tell them. But from a neurological perspective, emotions are pretty well understood, and simply programming a machine to understand the significance of its self-diagnostics or of changes to its environment would give it emotions.
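A minimal sketch of that idea (every name and threshold here is hypothetical, not any real robot API): diagnostics feed an emotion-like signal, and the signal changes behavior.

```python
# A toy robot that maps self-diagnostics to emotion-like signals, which in
# turn change its behavior: a machine analogue of pain and hunger.
# All names and thresholds are hypothetical, for illustration only.

def emotional_state(damage_level: float, battery_level: float) -> str:
    if damage_level > 0.5:
        return "pain"     # significant damage, flagged as negative
    if battery_level < 0.2:
        return "hunger"   # low energy: an incentive to recharge
    return "content"

def choose_action(state: str) -> str:
    actions = {"pain": "retreat", "hunger": "seek charger", "content": "keep working"}
    return actions[state]

state = emotional_state(damage_level=0.7, battery_level=0.9)
print(state, "->", choose_action(state))  # pain -> retreat
```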

Dec 17, 2012 at 09:52AM EST

Blue Screen (of Death) wrote:

@Kalmo

Do software programs have free will?

Certain Operating Systems (coughvistacough) seemed to.

Dec 17, 2012 at 01:32PM EST

PhoenixBlitzkrieg wrote:

Certain Operating Systems (coughvistacough) seemed to.

cough It was a driver problem and not a problem with Vista cough

Dec 17, 2012 at 01:38PM EST

CLYDE (Joe's Nightmare) wrote:

But why are we programming robots with emotions in the first place?

My point exactly.

With a robot, you just need it to provide a service and run its program.

Dec 17, 2012 at 03:39PM EST

CLYDE (Joe's Nightmare) wrote:

But why are we programming robots with emotions in the first place?

Because it’s cool. That’s the unofficial justification for many scientific endeavors.

Dec 17, 2012 at 07:59PM EST

I have a good reason: Robot actors!

That way Hollywood can cash in on remakes of Short Circuit, 2001, Bicentennial Man, A.I., I, Robot, etc.

Dec 17, 2012 at 08:22PM EST

What about friends? People already pay money for dogs and cats as companions, so why not have a “pet” you can talk to? That’s a good reason. Then, you could also use androids as in-house doctors, or even therapists, for patients that really need constant attention. I can see androids filling lots of social work roles that require real personality and emotion.

Last edited Dec 17, 2012 at 08:49PM EST
Dec 17, 2012 at 08:37PM EST

Syndic wrote:

What about friends? People already pay money for dogs and cats as companions, so why not have a “pet” you can talk to? That’s a good reason. Then, you could also use androids as in-house doctors, or even therapists, for patients that really need constant attention. I can see androids filling lots of social work roles that require real personality and emotion.

That… is actually quite an incredible prospect. Assuming it wouldn’t be demeaning for the robot.

Dec 18, 2012 at 01:39AM EST

Insouciant Insect wrote:

Of course robots could have human feelings.

Need I remind you all of when you nearly broke down into tears when a certain animated robot almost didn’t make it back from his adventure aboard the Axiom?

I’m not getting involved in this debate, simply because everything that could be said has been said, but I will say that if that’s the basis of your argument, you may want to rethink it a bit. Fictional work can never be considered evidence for anything, because what happens in it is entirely up to the imagination of the creator.

Dec 18, 2012 at 02:06AM EST

Syndic wrote:

What about friends? People already pay money for dogs and cats as companions, so why not have a “pet” you can talk to? That’s a good reason. Then, you could also use androids as in-house doctors, or even therapists, for patients that really need constant attention. I can see androids filling lots of social work roles that require real personality and emotion.

That sounds like servitude…

Which is dangerously close to slavery…

Dec 18, 2012 at 07:45AM EST

I’ll put it this way. If it is sentient, but is inherently designed to be removed quickly and cleanly for upgrades, directly or indirectly, by a human controller (the upgrade is still designed by a human; we don’t have programs programming to a real capacity yet), then it has no rights. Even if the program has feelings, is sentient, and so on, that does not necessarily mean it is alive, nor does it mean it is not. It is a grey area of definition that can be exploited to the point of genocide. There will be a fight over this, but it all depends on your definition of life.

If you go about it that way, you could also insinuate that minds are like programs: in essence, impossible to “kill”. They live on whether or not they are removed from their home of residence; however, this only applies if you believe in such things. What you’re asking is basically a non-issue at present, and will probably remain one for at least 5-10 years.

I would have to say that it is not genocide, since a living or sentient thing should be able to replicate, or in some manner attempt to do so. There are no programs which are both sentient and self-replicating. Viruses self-replicate but are not sentient; sentient programs are still unable to self-replicate. There is a program that can maintain itself by programming for itself (however, it is not self-aware as far as we can tell; it has no idea that it is programming for itself… again, as far as we are aware).
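For what it’s worth, mechanical self-replication is trivial for software; a classic quine prints its own source code. The hard part is the sentience, not the copying. A toy example in Python:

```python
# A classic Python quine: the two lines below print themselves exactly.
# Self-replication is mechanically trivial; it implies nothing about sentience.
s = 's = %r\nprint(s %% s)'
print(s % s)
```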

Dec 18, 2012 at 08:51AM EST

Teh Brawler wrote:

I’m not getting involved in this debate, simply because everything that could be said has been said, but I will say that if that’s the basis of your argument, you may want to rethink it a bit. Fictional work can never be considered evidence for anything, because what happens in it is entirely up to the imagination of the creator.

Fiction tends to be a reflection of human nature though. It’s obviously not as intrinsically “correct” as hard fact, but that doesn’t mean it’s useless in determining truth. 1984 was entirely fiction, and it represents a very conceivable reality.

Edit: But yeah, using fiction as the entire basis for a philosophical perspective will not earn you any points.

Last edited Dec 18, 2012 at 12:15PM EST
Dec 18, 2012 at 12:14PM EST

Cale wrote:

That sounds like servitude…

Which is dangerously close to slavery…

I’m not advocating it, I’m just saying what I think will happen. Besides that, one could program the machine to enjoy serving in such a way. But then, would that be the same as drugging your employees so that they enjoy working long hours for low pay?

Dec 18, 2012 at 02:50PM EST

This thread is closed to new posts.
