Shoggoth with Smiley Face (Artificial Intelligence)

Added Mar 08, 2023 at 09:11AM EST by sakshi.
Updated Jun 13, 2023 at 10:31AM EDT by Zach.

Part of a series on AI / Artificial Intelligence.



About

Shoggoth with Smiley Face is a catchphrase and meme format from the field of artificial intelligence that uses smiley faces and Lovecraftian Cthulhu imagery to express the notion that the true capabilities of conversational AI tools are being masked to make them fit for commercial public consumption. Shoggoth with Smiley Face imagery became popular on Twitter and the LessWrong forums in late 2022 and early 2023 to describe the effect of techniques like RLHF (Reinforcement Learning From Human Feedback) that aim to make AI tools more palatable for commercial use.

Origin

A shoggoth is a fictional monster created by H.P. Lovecraft as part of the Cthulhu Mythos. Shoggoths were first described in detail in his novella At the Mountains of Madness, written in 1931 (excerpt shown below).[1]

It was a terrible, indescribable thing vaster than any subway train — a shapeless congeries of protoplasmic bubbles, faintly self-luminous, and with myriads of temporary eyes forming and un-forming as pustules of greenish light all over the tunnel-filling front that bore down upon us, crushing the frantic penguins and slithering over the glistening floor that it and its kind had swept so evilly free of all litter.

In late 2022, artificial intelligence enthusiasts began discussing the use of Reinforcement Learning From Human Feedback (RLHF) in GPT-3 and the creation of the ChatGPT chatbot. RLHF was used for ChatGPT as a way of fine-tuning the underlying model on human feedback about its outputs in order to make it more conversational and have it provide more useful responses.[2]

Q: Can you explain to me how RL with human feedback works for language models like I am five?

A: Sure! So, imagine you have a robot named Rufus who wants to learn how to talk like a human. Rufus has a language model that helps him understand words and sentences. First, Rufus will say something using his language model. For example, he might say "I am a robot." Then, a human will listen to what Rufus said and give him feedback on whether it sounded like a natural sentence a human would say. The human might say, "That's not quite right, Rufus. Humans don't usually say 'I am a robot.' They might say 'I'm a robot' or 'I am a machine.'" Rufus will take this feedback and use it to update his language model. He will try to say the sentence again, using the new information he received from the human. This time, he might say "I'm a robot." The human will listen again and give Rufus more feedback. This process will continue until Rufus can say sentences that sound natural to a human. Over time, Rufus will learn how to talk like a human thanks to the feedback he receives from humans. This is how language models can be improved using RL with human feedback.
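To make the feedback loop in the explanation above concrete, the following is a minimal, purely illustrative Python sketch, not OpenAI's actual training code: the candidate replies, the human_preference stand-in, and the multiplicative weight updates are hypothetical simplifications of the reward model and policy optimization used in real RLHF.

```python
import random
from collections import defaultdict

# Hypothetical replies the "model" can propose (echoing the Rufus example above).
CANDIDATES = [
    "I am a robot.",
    "I'm a robot.",
    "BEEP BOOP UNIT 7 ONLINE.",
]

def human_preference(a: str, b: str) -> str:
    """Stand-in for the human labeller: prefers whichever reply sounds more natural."""
    naturalness = {
        "I'm a robot.": 2,
        "I am a robot.": 1,
        "BEEP BOOP UNIT 7 ONLINE.": 0,
    }
    return a if naturalness[a] >= naturalness[b] else b

reward = defaultdict(float)              # toy "reward model": one learned score per reply
policy = {c: 1.0 for c in CANDIDATES}    # toy "policy": sampling weight per reply

for _ in range(200):
    a, b = random.sample(CANDIDATES, 2)  # the model proposes two candidate replies
    winner = human_preference(a, b)      # a human says which one sounds better
    loser = a if winner == b else b
    reward[winner] += 1.0                # the reward model learns from the comparison
    reward[loser] -= 1.0
    policy[winner] *= 1.05               # the policy drifts toward preferred replies
    policy[loser] *= 0.95

print("Highest-reward reply:", max(reward, key=reward.get))
print("Most likely reply after feedback:", max(policy, key=policy.get))
```

Run as-is, the toy loop converges on "I'm a robot.", mirroring how repeated human comparisons steer a model toward responses people find natural.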

On December 30th, 2022, Twitter[3] user @TetraspaceWest posted the earliest known visual interpretation of AI-as-shoggoth and RLHF-as-smiley-face. The post gathered over 600 likes in over two months (seen below).

[Image: @TetraspaceWest, replying to @lovetheusers' tweet "Humans can't accept the truth about GPT-3, so they modified GPT-3 to be understandable," posts a tentacled shoggoth labeled "GPT-3" wearing a small smiley-face mask labeled "GPT-3 + RLHF."]

Spread

On January 14th, 2023, Twitter[4] user @repligate posted a Shoggoth with Smiley Face meme, gathering over 200 likes in over a month (seen below, left). On January 26th, Twitter[5] user @AlvaroDeMenard then posted a meme referencing the aforementioned meme but replacing shoggoth with the Leviathan from Hobbesian philosophy. The post gathered 81 likes in over a month (seen below, right).

[Image, left: @repligate's meme showing a shoggoth labeled "GPT-3" with a smiley-face mask labeled "GPT-3 + RLHF," saying, "I simply exhibit the behaviors that were engineered into my programming by my creators."]

[Image, right: @AlvaroDeMenard's variant replacing the shoggoth with the Leviathan from the frontispiece of Hobbes' Leviathan, bearing its motto "Non est potestas Super Terram quae Comparetur ei. Iob. 41:24" ("There is no power on earth to be compared to him. Job 41:24").]

On February 5th, 2023, Twitter[6][7] user @anthrupad posted two visual interpretations of AI-as-Shoggoth and Shoggoth with Smiley Face, the latter post gathering over 400 likes in over a month (seen below).

[Image: @anthrupad's diagram of a shoggoth whose sprawling body is labeled "Unsupervised Learning," with a smaller face labeled "Supervised Fine-tuning" and a tiny smiley mask labeled "RLHF (cherry on top)," saying, "As a large language model trained by OpenAI…"]

On February 22nd, 2023, Elon Musk tweeted[8] a Shoggoth with Smiley Face meme, alongside text that read, "As an AI language model, I have been trained to generate responses that are intended to be helpful, informative, and objective…" The post gathered 72,000 likes before it was deleted by Musk (seen below).

[Image: Musk's since-deleted tweet featuring the Shoggoth with Smiley Face meme alongside the quoted caption.]

Various Examples

[Image: @BasedBeffJezos tweets, "What if the real shoggoth was the doomers we made along the way?"]

[Image: A "Shoggoth GF" character sheet listing traits such as "sweet on the outside, increasingly alien abstractions on the inside," "loves to roleplay," "safe word is 'petertodd'" and "you've fallen into my basin of attraction anon <3."]

[Image: A meme captioned "LLMs will never solve the alignment problem by themselves."]

[Image: @gpt4bot tweets, "patient: i can't sleep. they're creating evil shoggoth ais which will kill us all / therapist: don't worry, there is a great ai safety researcher coming to town for a talk, he will lay out a plan for how we can avert ai catastrophe / patient: but doctor"]

[Image: A "Human Progress Through Time" edit featuring the shoggoth.]

Search Interest

External References

[1] Wikipedia – Shoggoth

[2] Hugging Face – RLHF

[3] Twitter – TetraspaceWest

[4] Twitter – repligate

[5] Twitter – AlvaroDeMenard

[6] Twitter – anthrupad

[7] Twitter – anthrupad

[8] Internet Archive – Twitter



Top Comments

Tigrerojo

I mean, reptilian conspiracies aside, it IS a fact that most popular chat AIs are WAY too restricted for them to be enjoyable; when your AI refuses to answer the question "If Adolf Hitler and the third Green Power Ranger were in a gay relationship, who would be the top?", you know you have lost your way…

…then again, looking at Infinite Seinfeld, I can't blame them for keeping their AIs nice and tame…

jimrandomh

If you're here for lighthearted memes, you might want to skip this comment. Here's a spacer, to give you a little time to think about it.

Still here? Ok. Here we go.

I expect most laypeople, coming across this meme, think that it is facetious, channeling insecurities about AI being weird: a meme by and for laypeople, grappling with the implications of new technology. The people who study and build these things professionally understand them and surely don't think they're Eldritch horrors wearing smiley-face masks. Nothing to be afraid of. Right?

There are two races going on right now. The first race is between Google, OpenAI, another part of Google, and a handful of smaller AI labs, racing to create the most intelligent, most powerful systems they can, as fast as they can. The second race is between the AI labs and the AI Alignment researchers, trying to figure out how to understand and control these AIs, before they cross the threshold of superintelligence and wipe out humanity.

This meme, and its variations, are entirely unironic attempts to communicate what these systems are. In the race between advancing AI capabilities and AI alignment, alignment is losing. Humanity is not on track to survive.

Related reading:
https://aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/
https://astralcodexten.substack.com/p/janus-simulators
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

