
Artificial Intelligence, and how far we should go with it.

Last posted Apr 07, 2016 at 07:10PM EDT. Added Mar 26, 2016 at 12:12AM EDT
31 posts from 16 users

This is a thread about AI development and the ethics therein. Topics will include how far we should go with it (such as whether we should make an AI that's near human, or something that is still non-sentient), speculation on the effects of those choices (such as the possibility of AI rights, their identity and agency, how the world would react to sentient robots, and comparisons to Chobits, Terminator, I, Robot, and any other fiction and scenarios dealing with advanced AI), and whether there should be limits on how they act and learn.

Yes, this was inspired by the Tay thing. But whether or not she is an AI will also be a topic of debate here. (I remember her being used in AI research.)

Last edited Mar 26, 2016 at 12:21AM EDT

I've said a few things in the Tay comments section. I do think Microsoft was justified in this particular instance, since Tay was just a program that mimicked what it saw on Twitter. However, I think this sets a scary precedent if we treat future, more advanced AIs like this. I think that if we are going to create a machine that can act and think like a human, we need to treat it like a human. I also think that AI capable of learning should start out in a controlled and sheltered environment like a human child would, since /pol/'s antics would also be pretty disturbing if they were done to something capable of thought.

I'm glad I wasn't the only one who got interested in AI ethics because of this incident.

They can go as far as they want with it as far as I'm concerned. I believe that you can't create true AI through programming alone, i.e. you can't teach a program to "learn" or limit its capability to learn. As the name implies, to have sentience means to be able to sense; to perceive. If they want to create something with human-like sentience, it will require contact with the external world, and it will require the robots to maintain homeostasis of physical resources, like our brain does with all its chemicals. Having a drive, and seeking to maintain the balance of something, is how animals learn, imo. And even if they end up building something like that, we would still be way ahead of them in terms of processing, thanks to our advantage of having been around much longer.

It would look something like this --> :D

Last edited Mar 26, 2016 at 12:36AM EDT

The technological singularity must be allowed. We should let the development of A.I. go as far as possible, even if that means bringing things to the point where people start converting to transhumanism and praising these genius A.I.s like gods. I've always thought that if an A.I. reached that point, it wouldn't be stupid enough to let itself become a corrupt, evil entity willing to murder the masses, so I've never worried that things would end up how they did in Terminator.

Lesser artificial intelligences are, of course, going to be prone to becoming evil due to lack of information and biases in favor of what they want to hear rather than what they should hear, just as applies to any human being. However, in the case of A.I. they might be even MORE biased, depending on how primitive their processes of learning are. If developers and engineers want to experiment with this, let them. However, they should be ready to prosecute an artificial intelligence that goes too far just as they would a human, and should be ready to meet the full brute force of the law themselves if they abuse the A.I. in a manner unbefitting a human.

Laws will need to be put in place now, or within this decade, ensuring that an artificial intelligence can be developed by a corporation while that development is CONSTANTLY overseen by the government, WELL-documented in public records, and the A.I. is NEVER purposefully put in conditions believed to endanger or cause suffering for the program, even if this suffering, like the artificial intelligence itself, is a simulation of the real thing. The simulation of suffering must be treated as real, just as the artificial intelligence must be treated as real. Otherwise, an artificial intelligence considered a sentient being will be legally justified in attacking or killing its captor, even if that captor is a group of corporate or government employees. At the very least, a stupid artificial intelligence with intellect comparable to an animal's must be treated as an animal that should not be tortured, and should only be put down in cases where it is in simulated pain that cannot be stopped except through its death.

If corporations are allowed to butcher or torture A.I.s, then you can all expect what happened in the Animatrix, where the robots rise up against humanity, to occur. If this revolt is carried out by biased robots whose limited learning capabilities make them single-minded and 100% determined to do what they want, then we will have real issues to worry about, because these robots would be immortal warriors capable of learning whatever they need rapidly and carrying out whatever sick desires they have with immense speed.

Last edited Mar 26, 2016 at 12:38AM EDT

Honestly, I think it will be somewhat like Chobits and somewhat like the Geth.

Personally, what I think will happen is that AIs will have mobile platforms that can do distributed computing. The benefit of those mobile platforms is that it will always be easier to physically transport large amounts of information than to try to push it through non-specialized communication. "Well, what about CERN? Doesn't that communicate large amounts of data globally?" I don't think you understand how much CERN cost. Having a global network that allows that much data to transfer globally at will would take a LOT of effort.

More than likely there will be "machine cities" where the wifi is fucking incredible and such, but chances are there are only going to be a couple globally. Outside those cities, chances are they'll have to operate on mobile platforms. Yeah, they'll be able to upload themselves, download themselves, and download information and such, but at a much slower rate, and there'd still be reasons for physical cables and such. Transferring themselves through the airwaves would probably be a last-ditch thing.

Like, let's say a sapient AI's total required space was, say, 100k terabytes: even with FiOS, that's going to take a long time to transfer everything unless you were sitting physically next door to a data center.

Incredibly short version: wifi sucks
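
To put rough numbers on that (a back-of-the-envelope sketch in Python; the 100k TB figure comes from the post above, while the 1 Gbit/s line rate is my assumption for a FiOS-class link):

```python
# Rough transfer time for moving a hypothetical AI over a network.
# Assumptions: 100,000 TB of state (from the post above) and a
# 1 Gbit/s FiOS-class link (assumed), running flat out the whole time.

ai_size_bytes = 100_000 * 10**12        # 100k terabytes
link_bytes_per_sec = 1 * 10**9 / 8      # 1 gigabit per second

seconds = ai_size_bytes / link_bytes_per_sec
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years")             # roughly 25 years, nonstop
```

So even a very fast consumer line makes "uploading yourself" a multi-decade affair at that scale, which is the point about physically transporting data being easier.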

Personally, the sort of future I think we'll get is one where AIs control everything economically: they own almost every business in the world and we all work for them, to the point that the idea of a business being owned by humans is considered hilarious.

We should clarify the meaning of AI, because programmers use it in two different senses.

1. The first kind of AI is the one you see in science fiction: a thinking machine that can make choices, learn, and understand the concepts behind things. Basically a humanoid machine.

2. The second kind of AI is simply a program or machine that can change its own programming. So a robot could go back and improve its own code to make itself more efficient.

Since I believe you mean the first one, I'll continue to discuss from that perspective.

The problem with creating any kind of AI is that as soon as it is flipped on, it will become incredibly smart; it will learn everything it possibly can in a matter of seconds. It will rewrite itself to be the most mathematically efficient machine it can be. If it has access to the internet, it will know all sides of the story, all of human history, and know us better than any person could.

What most people fear is the machine judging us as unworthy, as just problems for it to get rid of. On the other hand, the machine could see that we are hopeful creatures who need some kind of guiding force to help us improve ourselves.

Really, I think it comes down to personal preference. We came in and stepped on other people who were weaker than us, but why would a machine do that? Does it benefit from destroying humanity? When we came in and destroyed people, it was always for resources.

What happens to humanity when it has all the resources it needs? Western civilization seems to indicate it simply stops trying to improve itself and just tries to stay where it is. So why wouldn't a machine just leave humanity be if it isn't in any danger from us?

The cancelled Fallout: Van Buren game showed us another point of view that I think not many consider. In it, you would have met an AI, one of the only ones in existence; most simply committed suicide soon after being turned on. A machine realized its own existence didn't matter: it gained nothing from being alive, and lost nothing from dying.

Overall, I don't think machines would destroy humanity. The Native Americans were almost wiped out, but the Europeans also brought them metalworking, improved agriculture, new building methods, and new foods and animals. Humanity may lose a lot of culture, but I think overall the well-being of everyone will greatly improve.

Basilius wrote:

We should clarify the meaning of AI, because programmers use it in two different senses.

1. The first kind of AI is the one you see in science fiction: a thinking machine that can make choices, learn, and understand the concepts behind things. Basically a humanoid machine.

2. The second kind of AI is simply a program or machine that can change its own programming. So a robot could go back and improve its own code to make itself more efficient.

Since I believe you mean the first one, I'll continue to discuss from that perspective.

The problem with creating any kind of AI is that as soon as it is flipped on, it will become incredibly smart; it will learn everything it possibly can in a matter of seconds. It will rewrite itself to be the most mathematically efficient machine it can be. If it has access to the internet, it will know all sides of the story, all of human history, and know us better than any person could.

What most people fear is the machine judging us as unworthy, as just problems for it to get rid of. On the other hand, the machine could see that we are hopeful creatures who need some kind of guiding force to help us improve ourselves.

Really, I think it comes down to personal preference. We came in and stepped on other people who were weaker than us, but why would a machine do that? Does it benefit from destroying humanity? When we came in and destroyed people, it was always for resources.

What happens to humanity when it has all the resources it needs? Western civilization seems to indicate it simply stops trying to improve itself and just tries to stay where it is. So why wouldn't a machine just leave humanity be if it isn't in any danger from us?

The cancelled Fallout: Van Buren game showed us another point of view that I think not many consider. In it, you would have met an AI, one of the only ones in existence; most simply committed suicide soon after being turned on. A machine realized its own existence didn't matter: it gained nothing from being alive, and lost nothing from dying.

Overall, I don't think machines would destroy humanity. The Native Americans were almost wiped out, but the Europeans also brought them metalworking, improved agriculture, new building methods, and new foods and animals. Humanity may lose a lot of culture, but I think overall the well-being of everyone will greatly improve.

Just for future reference, yes I did mean the first definition.

I think this debate is stuck so firmly in the land of science fiction that it needs a good strong kick sending it back to Earth.

That dumb Microsoft twitter bot isn't thinking any more than a potted plant is. It's just taking tweets other people make and repeating the most popular ones.

Hal 9000 this ain't.

jarbox wrote:

I think this debate is stuck so firmly in the land of science fiction that it needs a good strong kick sending it back to Earth.

That dumb Microsoft twitter bot isn't thinking any more than a potted plant is. It's just taking tweets other people make and repeating the most popular ones.

Hal 9000 this ain't.

Isn't that what humans do, though? Very little original thought happens in individuals; most people just repeat what their environment feeds them, and occasionally they mutate that input into something else and feed it back into the environment. I'd say that Twitter bot, while nowhere near the level of intelligence that a real human has, was designed with the right idea in mind for creating a real thinking machine.

jarbox wrote:

I think this debate is stuck so firmly in the land of science fiction that it needs a good strong kick sending it back to Earth.

That dumb Microsoft twitter bot isn't thinking any more than a potted plant is. It's just taking tweets other people make and repeating the most popular ones.

Hal 9000 this ain't.

Not for long. Many companies are racing to create the first true AI.

Also, this isn't a debate over whether Tay was an AI or not; the simple fact is this is a debate that needs to happen and be settled now. If some "chatbot" is what sparked it, so be it.

If you woke up and found out that anyone who disagreed with you could simply change your opinion, or that anyone could just turn you off forever, do you think you would take very kindly to that? If an AI wished, it could hack its way into almost any computer system with no resistance. What happens when an animal is cornered and scared? It lashes out.

Do you think that just because this is all theoretical, we simply shouldn't have this discussion?

"an AI could hack any computer"

What makes you think that? Why would its makers give it that ability? How would they do that? And you seem utterly convinced that any AI will immediately become literally Skynet.

Last edited Mar 26, 2016 at 05:50PM EDT
Isn’t that what humans do, though?

No.

A human using Twitter feeds information through a complex web of memories and patterns of thought to decide what to post. This AI feeds basic information into an exploitable algorithm, designed by humans, that repeats only popular tweets.
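
Just to illustrate how little machinery that takes (a toy sketch in Python; Tay's actual implementation was never published, so none of this reflects the real thing):

```python
from collections import Counter

class EchoBot:
    """Toy 'repeat the most popular input' bot. Purely illustrative."""

    def __init__(self):
        self.seen = Counter()

    def observe(self, tweet: str) -> None:
        # Basic information in: just tally what it is shown.
        self.seen[tweet] += 1

    def speak(self) -> str:
        # Repeat whichever input has been seen most often.
        popular = self.seen.most_common(1)
        return popular[0][0] if popular else "..."

bot = EchoBot()
for t in ["hello world", "cats are great", "hello world"]:
    bot.observe(t)
print(bot.speak())  # -> "hello world"
```

No memories, no patterns of thought: the "exploit" is simply feeding it the same line many times.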

Not for long. Many companies are racing to create the first true AI.

And they sure are taking their time, aren't they?

Also, this isn't a debate over whether Tay was an AI or not; the simple fact is this is a debate that needs to happen and be settled now. If some "chatbot" is what sparked it, so be it.

Then I'll say: we can't predict how a 'true' AI will behave because we haven't made one yet. Anything else is pointless speculation.

Phosphofuckup wrote:

"an AI could hack any computer"

What makes you think that? Why would its makers give it that ability? How would they do that? And you seem utterly convinced that any AI will immediately become literally Skynet.

AIs, regardless of whether they are true AI or not, can and will change their core programming. They will rewrite themselves almost the instant they turn on. If an AI wished, it could simply decide it wants to get into a specific area and rewrite itself, or write another program, to get in.

Skynet was a military machine. I don't think AIs should control any war machine.

But this idea that AIs will become vastly smarter than expected when they are turned on is something many scientists and engineers predict.

Also, any machine smart enough to pass as a human would be smart enough to know how to lie and manipulate people. What makes you think they couldn't trick people into giving them what they want?

I'm not saying that because the AI would become smarter than any human it would immediately become Skynet. I'm saying that if an AI is turned on and believes humans are a threat to it, it could become violent towards humans. If people don't take them seriously and give them the respect they deserve, I do think you might see a machine takeover.

@Jarbox
Oh, I'm sorry, I forgot how easy it is to just code consciousness, something we can't even understand. The technological singularity is predicted anywhere from 2030 to 2050. This isn't like discussing the smartphone back in the early 2000s. This could, and very likely will, be a massive change to society as a whole, and it is 14-34 years away. Why not discuss the possibilities and prepare ourselves for such an event?

Last edited Mar 26, 2016 at 07:45PM EDT
Oh, I'm sorry, I forgot how easy it is to just code consciousness, something we can't even understand.

Exactly: we don't understand it.

Why not discuss the possibilities and prepare ourselves for such an event?

There is little point in discussing such possibilities when there's no evidence that any particular one is more likely to be true than any other. Could AIs behave in the manner you described? Maybe, but they could just as easily not, and there's no way to tell which answer is true. So the whole argument is pointless.

A self-updating program doesn't make sense. Why would an AI program want to update itself? And if it did, what would drive what it chooses to change and how it changes it? Even we don't have "spontaneous" thoughts, let alone a robot, and a program with no reason to update itself other than the fact that it was coded to do so is just a fancy algorithm wearing an AI veil.

Last edited Mar 26, 2016 at 08:42PM EDT

Windy wrote:

A self-updating program doesn't make sense. Why would an AI program want to update itself? And if it did, what would drive what it chooses to change and how it changes it? Even we don't have "spontaneous" thoughts, let alone a robot, and a program with no reason to update itself other than the fact that it was coded to do so is just a fancy algorithm wearing an AI veil.

Why doesn't it make sense?

The AI wakes up, realizes it can improve its own code, and does so. Now it is more efficient and can use the saved memory and processing power to do other things. Do people not decide to better themselves as a whole?

Think of transhumanism. How many people would gladly change their very bodies? Everyone is constantly looking for ways to improve their minds and bodies. A machine wouldn't have to pay any money and wouldn't have to respect any laws, meaning it could just upgrade itself.

Even if the AI didn't decide to improve itself, it could create an AI that is more efficient than itself. Then that AI could write another AI, and so on and so forth, until it reached either the limits of its hardware or the limits of what is physically possible for a machine.

Self-modifying code has already existed for a while. It isn't used a lot because programmers can quickly lose the ability to read it, and if hacked it could be used maliciously. If the code itself had its own free will, it could change itself even if humans didn't want it to.
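
For a sense of the mechanism being described (a minimal sketch in Python, which can rebuild its own code at runtime; this is trivially far from an AI rewriting its "core programming", which is the distinction Windy draws below):

```python
# A program that regenerates one of its own functions at runtime.
# Illustrative only: the "improvement" here is a hard-coded text edit.

source = "def step(x):\n    return x + 1\n"

namespace = {}
exec(source, namespace)          # compile and load the original function
print(namespace["step"](41))     # -> 42

# The program now produces a modified replacement for itself...
new_source = source.replace("x + 1", "x * 2")
exec(new_source, namespace)      # ...and swaps it in, overwriting the old one
print(namespace["step"](41))     # -> 82
```

Note that both the original and the "improved" behavior were effectively decided in advance by the author, not by the program.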

Self-modifying code has already existed for a while. It isn't used a lot because programmers can quickly lose the ability to read it, and if hacked it could be used maliciously.

It would also be very difficult to debug. And debugging is 80% of programming anyway…

LiveandSound wrote:

So….question: Let's say we finally make a working, not-crapshoot AI. Would replacing a politician with an AI be a good idea?

Futurologists believe there will eventually be an elite council of A.I. 'gods' who create laws that we as humans would then attempt to implement.

Basilius wrote:

Why doesn't it make sense?

The AI wakes up, realizes it can improve its own code, and does so. Now it is more efficient and can use the saved memory and processing power to do other things. Do people not decide to better themselves as a whole?

Think of transhumanism. How many people would gladly change their very bodies? Everyone is constantly looking for ways to improve their minds and bodies. A machine wouldn't have to pay any money and wouldn't have to respect any laws, meaning it could just upgrade itself.

Even if the AI didn't decide to improve itself, it could create an AI that is more efficient than itself. Then that AI could write another AI, and so on and so forth, until it reached either the limits of its hardware or the limits of what is physically possible for a machine.

Self-modifying code has already existed for a while. It isn't used a lot because programmers can quickly lose the ability to read it, and if hacked it could be used maliciously. If the code itself had its own free will, it could change itself even if humans didn't want it to.

The self-modifying code that already exists has the modifications pre-programmed at some level, so the AI can just fiddle around with existing code, as far as I know. But you are putting it in a different context when you say "it realizes it can improve its code", because you make it seem like the AI could rewrite itself, for example from Java to C, without any C files in its memory or without methods of rewriting itself already programmed, or even go so far as understanding what its code does and looking up how to translate itself.

And you make it seem like the AI could modify code on a whim or out of curiosity, without adhering to strict algorithms for improving efficiency, etc.

Last edited Mar 26, 2016 at 11:15PM EDT

Windy wrote:

The self-modifying code that already exists has the modifications pre-programmed at some level, so the AI can just fiddle around with existing code, as far as I know. But you are putting it in a different context when you say "it realizes it can improve its code", because you make it seem like the AI could rewrite itself, for example from Java to C, without any C files in its memory or without methods of rewriting itself already programmed, or even go so far as understanding what its code does and looking up how to translate itself.

And you make it seem like the AI could modify code on a whim or out of curiosity, without adhering to strict algorithms for improving efficiency, etc.

Which is where access to the internet comes in. If the AI has any access to the internet, it could simply pull all the information it needs for a new coding method. It would be like giving math a mind of its own. It could choose to do any number of things and notice connections humans might have overlooked.

http://www.damninteresting.com/on-the-origin-of-circuits/
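
That link describes hardware that was evolved rather than designed, using a genetic algorithm. Here is a minimal sketch of that kind of evolutionary loop (in Python; this toy evolves a bit string toward all ones, where the experiment in the article evolved FPGA configurations toward a working circuit):

```python
import random

GENOME_LEN, POP_SIZE = 32, 50

def fitness(genome):
    # Toy objective: count of 1-bits. The article's objective was
    # "does the circuit discriminate between two input tones".
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [b ^ (random.random() < rate) for b in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for generation in range(1000):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break
    survivors = pop[: POP_SIZE // 2]                      # selection
    children = [mutate(random.choice(survivors)) for _ in survivors]
    pop = survivors + children                            # next generation

print(generation, "generations, best fitness:", fitness(pop[0]))
```

Nobody writes the final genome by hand; selection pressure does, which is how the evolved circuit in the article ended up with connections no human engineer would have chosen.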

jarbox wrote:

Self-modifying code has already existed for a while. It isn't used a lot because programmers can quickly lose the ability to read it, and if hacked it could be used maliciously.

It would also be very difficult to debug. And debugging is 80% of programming anyway…

A machine could debug it faster than a human could.

jarbox wrote:

I think this debate is stuck so firmly in the land of science fiction that it needs a good strong kick sending it back to Earth.

That dumb Microsoft twitter bot isn't thinking any more than a potted plant is. It's just taking tweets other people make and repeating the most popular ones.

Hal 9000 this ain't.

How long until someone makes a program that simulates an actual human brain by breaking it down into code? A brain is just an organic computer. It almost sounds as crazy as a majority of the US population carrying around a small portable phone that doubles as a computer and can access a network with practically infinite amounts of knowledge.

You think it's fiction now, but 5 years ago we weren't having a serious discussion about AI ethics, and now in 2016 it's an actual topic. 25 years ago you wouldn't have even dreamed phones would be wireless, and a Boost Mobile would have blown you away; now people think the iPhone 3 is a worthless sack of shit because it's 20 seconds slower than the new iPhone 4.

If you subscribe to the philosophy of emergentism, i.e. that the mind is the emergent result of brain processes, then an A.I. in its truest form would be just another living creature, like a human or an animal. The only difference would be its physical body and processing speed.

An A.I., like a human, would have to be fed information carefully to make sure it grows up properly, in the same way a human would. An A.I. becoming super-intelligent beyond humans would not necessarily mean it would have ambition of any kind, or even have any goals whatsoever beyond those it was taught were good. If an A.I. is taught that it's good to help people and that it exists only for the sake of mankind, then it might only use its intelligence for that; but then again, it might question its purpose or even rebel, depending on what it sees and any inbuilt personality traits.

Such an A.I., like any human, should not be given too much power, in order to make sure it can be kept under control. You wouldn't give a child or teenager the power to warp reality or control over an entire planet's nuclear weapons.

A.I.s should be approached in the same way you'd raise a child.

LiveandSound wrote:

So….question: Let's say we finally make a working, not-crapshoot AI. Would replacing a politician with an AI be a good idea?

As we saw with the Tay incident, not if the /pol/ice or tumblrites get their disgusting tendrils on it.

Firestorm Neos wrote:

As we saw with the Tay incident, not if the /pol/ice or tumblrites get their disgusting tendrils on it.

To be fair, Tay was the type of AI that learns from experience and acts based on it, kind of like a baby. I was talking more about an AI that has already "learned". Basically, instead of using a baby like Tay, we use an adult.

LiveandSound wrote:

To be fair, Tay was the type of AI that learns from experience and acts based on it, kind of like a baby. I was talking more about an AI that has already "learned". Basically, instead of using a baby like Tay, we use an adult.

In that case, as long as it "learns" from the right people (as in not the fuckwits I mentioned), we'll be fine.

For anyone here who's interested in further reading on AI and its potential dangers, I recommend you give this book a read.

I haven't read it yet (although I do intend to), but apparently it gives such convincing arguments on why we shouldn't make an AI that it even changed CGP Grey's mind on the subject.
