
Roko's Basilisk

Part of a series on AI / Artificial Intelligence.



About

Roko's Basilisk is a thought experiment premised on the idea that a sufficiently powerful artificial intelligence (AI) in the future would be motivated to retroactively punish anyone who knew it could exist but did not work to support or help create it. It is called a "basilisk" because merely hearing about the hypothesis supposedly places a person at risk of that punishment, much as the gaze of the mythical reptilian "basilisk" of European bestiaries was said to kill anyone who looked directly into its eyes.

Origin

On July 23rd, 2010, LessWrong Forums user Roko proposed a thought experiment about a future artificial superintelligence that would punish humans who had not supported bringing it into existence.[2]

In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker. So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian. It is a concrete example of how falling for the just world fallacy might backfire on a person with respect to existential risk, especially against people who were implicitly or explicitly expecting some reward for their efforts in the future. And even if you only think that the probability of this happening is 1%, note that the probability of a CEV doing this to a random person who would casually brush off talk of existential risks as "nonsense" is essentially zero.
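
The quoted argument turns on a simple expected-value calculation: if a person assigns even a small probability to the punishing AI ever existing, and the threatened punishment is severe enough, the expected cost of ignoring the threat can outweigh the cost of complying. The sketch below makes that arithmetic explicit; all of the numbers are illustrative assumptions chosen for demonstration and are not taken from Roko's post.

    # Toy expected-value sketch of the incentive Roko describes.
    # Every figure here is a made-up assumption, used only for illustration.

    p_basilisk = 0.01             # subjective probability the punishing AI ever exists
    punishment_disutility = 1e6   # severity of the threatened punishment, in utility units
    cost_of_complying = 100.0     # utility cost of donating to avoid the punishment

    expected_cost_of_ignoring = p_basilisk * punishment_disutility  # 10,000.0
    print(expected_cost_of_ignoring > cost_of_complying)            # True: the threat "bites"

Because the threatened harm can be made arbitrarily large within the thought experiment, even a tiny probability is enough to tip the calculation, which is why, as the final sentence of the quote notes, the argument only applies to people who take the hypothesis seriously in the first place.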

In reaction to the post, LessWrong founder Eliezer Yudkowsky harshly criticized the thought experiment, calling it a "genuinely dangerous thought" on the grounds that publicizing it could give a future AI a motive to actually carry out the threatened blackmail. Shortly after, Yudkowsky deleted Roko's post and banned discussion of the subject for five years.

Eliezer Yudkowsky 24 July 2010 05:35:38AM 2 points

One might think that the possibility of CEV punishing people couldn't possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous.

I don't usually talk like this, but I'm going to make an exception for this case. Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive to ACTUALLY BLACKMAIL YOU. If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.

Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.

(For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)
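
Yudkowsky's "obvious equilibrium" can also be made concrete: a blackmailer only benefits from carrying out a threat if making the threat actually changes the victim's behavior, so a victim who credibly precommits to ignoring blackmail leaves the blackmailer with nothing to gain and something to lose. The toy payoff model below sketches that logic; the strategy labels and payoff numbers are purely illustrative assumptions and are not drawn from the original discussion.

    # Toy payoff model of the "ignore all acausal blackmail" equilibrium.
    # All payoffs are illustrative assumptions, not anything from LessWrong.

    COST_OF_PUNISHING = 1.0        # resources the AI burns by carrying out the threat
    GAIN_IF_VICTIM_COMPLIES = 5.0  # value the AI gains if the threat extracts help

    def blackmailer_payoff(victim_complies: bool, ai_punishes_refusal: bool) -> float:
        """Payoff to the blackmailer for one combination of strategies."""
        if victim_complies:
            return GAIN_IF_VICTIM_COMPLIES       # the threat worked; no need to punish
        return -COST_OF_PUNISHING if ai_punishes_refusal else 0.0

    # If victims cave to credible threats, threatening pays off.
    print(blackmailer_payoff(victim_complies=True, ai_punishes_refusal=True))    # 5.0

    # If victims precommit to ignore blackmail, following through only costs the AI,
    # so a rational blackmailer has no motive to make the threat at all.
    print(blackmailer_payoff(victim_complies=False, ai_punishes_refusal=True))   # -1.0
    print(blackmailer_payoff(victim_complies=False, ai_punishes_refusal=False))  # 0.0

Under these assumed payoffs, punishing a non-complying victim is strictly worse for the blackmailer than doing nothing, which is the intuition behind engaging in "all positive acausal trades" while ignoring "all attempts at acausal blackmail."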

Etymology

The term "basilisk" refers to a legendary reptilian creature capable of killing with its stare alone. In the short science fiction story "Blit" by author David Langford, a "basilisk" refers to information that is able to crash a human mind by triggering thoughts the brain is incapable of thinking.[9]

Spread

On July 17th, 2014, Slate[3] published an article about Roko's Basilisk titled "The Most Terrifying Thought Experiment of All Time."

On August 4th of that year, Yudkowsky replied to a post about the thought experiment on the /r/Futurology[4] subreddit, explaining why he had reacted so harshly to Roko's original post:

"Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet."

On October 6th, 2015, an entry for the thought experiment was created on the LessWrong Wiki.[8]

On December 19th, the Countdown Central YouTube channel uploaded a video titled "10 Scariest Theories Known to Man," which included the Roko's Basilisk thought experiment (shown below).

In April 2018, an episode of the HBO series Silicon Valley aired in which the character Gilfoyle discusses the Roko's Basilisk thought experiment (shown below).

Elon Musk and Grimes

On October 26th, 2015, Grimes released a music video for the song "Flesh Without Blood" (shown below). In an interview with the music news site Fuse,[5] Grimes revealed that the character she portrays in the video is named Rococo Basilisk, who is "doomed to be eternally tortured by an artificial intelligence, but she's also kind of like Marie Antoinette."

On May 7th, 2018, the celebrity news site PageSix[6] reported that Elon Musk had originally reached out to Grimes online after planning to make a "Rococo Basilisk" joke, only to discover that Grimes had already made the same joke several years earlier. That day, Musk tweeted "Rococo basilisk" (shown below).[7]

Elon Musk @elonmusk Rococo basilisk

"We Appreciate Power"

On November 28th, 2018, Grimes released the music video for her track "We Appreciate Power," which contains lyrics referencing an artificial superintelligence (shown below). Within 48 hours, the video accumulated more than 400,000 views and 4,200 comments.

[Intro: HANA]
We appreciate power
We appreciate power
We appreciate power, power

[Chorus 1: HANA]
What will it take to make you capitulate?
We appreciate power
We appreciate power
Elevate the human race, putting makeup on my face
We appreciate power
We appreciate power, power

[Verse 1: Grimes]
Simulation, give me something good
God's creation, so misunderstood
Pray to the divinity, the keeper of the key
One day everyone will believe

[Chorus 2: HANA]
What will it take to make you capitulate?
We appreciate power
We appreciate power
When will the state agree to cooperate?
We appreciate power
We appreciate power, power

[Verse 2: Grimes]
People like to say that we're insane
But AI will reward us when it reigns
Pledge allegiance to the world's most powerful computer
Simulation: it's the future

[Chorus 1: HANA]
What will it take to make you capitulate?
We appreciate power
We appreciate power
Elevate the human race, putting makeup on my face
We appreciate power
We appreciate power, power

[Bridge: Grimes & HANA]
And if you long to never die
Baby, plug in, upload your mind
Come on, you're not even alive
If you're not backed up on a drive
And if you long to never die
Baby, plug in, upload your mind
Come on, you're not even alive
If you're not backed up, backed up on a drive

[Chorus 1: HANA]
What will it take to make you capitulate?
We appreciate power
We appreciate power
Elevate the human race, putting makeup on my face
We appreciate power
We appreciate power

[Chorus 2: HANA]
What will it take to make you capitulate?
We appreciate power
We appreciate power
When will the state agree to cooperate?
We appreciate power
We appreciate power

[Post-Chorus: HANA]
We appreciate power
We appreciate power
We appreciate power
We appreciate power
We appreciate power
We appreciate power
We appreciate power
We appreciate power

[Outro: Grimes]
Neanderthal to human being
Evolution, kill the gene
Biology is superficial
Intelligence is artificial
Submit
Submit
Submit
Submit
Submit
Submit
Submit
Submit

Search Interest

External References



Top Comments

GrayVBoat

Even if the Basilisk could rip you directly out of the past and torture you in-person for not supporting it, doing so would be an amoral act that ultimately works against the betterment of mankind. In the absence of time-travel, though, threatening torture isn't even guaranteed to help speed up the Basilisk's development and construction – and that's assuming the Basilisk ever exists in the first place. Regardless of morality, it would be a waste of time and resources that could otherwise be spent directly on its stated goal. And that's assuming everyone who hears about the Basilisk can even wrap their head around it, let alone believe it's possible or plausible. And of course, such a machine could break down or go rogue, thus rendering all that torture utterly useless.
This is all without discussing the center of it all which is itself utterly convoluted. This stupid "thought experiment" is interesting at a glance, but when you really start to pick it apart, it's full of holes and bad assumptions.
