

A.I. rights?

Last posted Sep 16, 2015 at 08:01PM EDT. Added Sep 15, 2015 at 04:05PM EDT
57 posts from 17 users

Ryumaru Borike wrote:

@cb5 It's not about "feeling threatened"; it's the possible consequences of giving free will to a vastly superior intelligence and hoping it decides to be kind that are the problem.

@Basilius "In all networks powerful enough to hold it, you mean?" Even then, a computer virus could kill it, even if it's uploaded to every network.

Isn't the idea of A.I. taking over and controlling us just as worrying as the idea of them killing us all?

Not really. If that AI eliminates war and gives everyone a fair and just chance at life and prosperity, then I'd argue the AI would be the best leader, since AIs wouldn't be greedy or corrupt. It wouldn't be bribed under the table like a politician.

A computer virus would only be able to kill it if it didn't adapt. An AI, like you said, could change its own programming; you would need a virus that is itself an AI to kill an AI.

If an AI wanted us all dead, then everyone suffers.

Basilius wrote:

Not really. If that AI eliminates war and gives everyone a fair and just chance at life and prosperity, then I'd argue the AI would be the best leader, since AIs wouldn't be greedy or corrupt. It wouldn't be bribed under the table like a politician.

A computer virus would only be able to kill it if it didn't adapt. An AI, like you said, could change its own programming; you would need a virus that is itself an AI to kill an AI.

If an AI wanted us all dead, then everyone suffers.

If A.I.s have free will and emotion, why _wouldn't_ they become greedy or corrupt? In fact, why would they care for humanity at all? Why would they eliminate war (which can't be done without eliminating free will, mind you)? Why would they care about human plight?

@Ryumaru
Because greed and such are human concepts based upon human physical sensations.

Not to sound like an utter dick here, but I get the feeling you're arguing from a basis of having watched too many sci-fi films.

cb5 wrote:

@Ryumaru
Because greed and such are human concepts based upon human physical sensations.

Not to sound like an utter dick here, but I get the feeling you're arguing from a basis of having watched too many sci-fi films.

Which, if they make A.I. with human emotions, would carry over.

I'm not saying it will happen for certain, but there is a definite risk, and it's not sci-fi but scientists' own worries that got me worried. It isn't some fantastical situation that can only happen in a summer blockbuster; it's a tangible risk that has to be taken into account when making A.I. Just because a writer thought of it first and it got popular does not mean it's pure fantasy. The idea that a thinking being with free will would never turn on us seems more fantastical, if you ask me.

@Ryumaru
Well, you sure skimmed the hell out of what Stephen Hawking was talking about. 1) Stephen Hawking is a physicist, not a programmer. 2) What they're worried about is the militarization of AI. Basically, they're not saying we should ban AI development, but rather that the military shouldn't make an actual Skynet.

The problem is that the current people in charge of the US military aren't fucking listening to actual experts in the field of AI development. They want to actually make Skynet; pretty much everyone in the field is telling them that's a bad idea, and the military isn't listening to anyone.

Basically what the push is actually about:
Military, "Skynet is a wonderful idea!"
Pretty much every scientist, "Skynet is a horrible idea!"

It's not a push to stop the development of AI, or even remotely a claim that we should stop working towards it; rather, the push is against using AI for military purposes.

cb5 wrote:

@Ryumaru
Well, you sure skimmed the hell out of what Stephen Hawking was talking about. 1) Stephen Hawking is a physicist, not a programmer. 2) What they're worried about is the militarization of AI. Basically, they're not saying we should ban AI development, but rather that the military shouldn't make an actual Skynet.

The problem is that the current people in charge of the US military aren't fucking listening to actual experts in the field of AI development. They want to actually make Skynet; pretty much everyone in the field is telling them that's a bad idea, and the military isn't listening to anyone.

Basically what the push is actually about:
Military, "Skynet is a wonderful idea!"
Pretty much every scientist, "Skynet is a horrible idea!"

It's not a push to stop the development of AI, or even remotely a claim that we should stop working towards it; rather, the push is against using AI for military purposes.

I'm not saying we should ban A.I. development either, just that we shouldn't give them the ability to act without human command. That's just one concern among many.

Ryumaru Borike wrote:

I'm not saying we should ban A.I. development either, just that we shouldn't give them the ability to act without human command. That's just one concern among many.

My only limit would be giving them control of military weapons. An AI that coordinates military operations around the world would be fine, I think.


This thread is closed to new posts.
