Eliezer Yudkowsky making a funny face, with the "Society If" meme template in the background.

Eliezer Yudkowsky

Part of a series on AI / Artificial Intelligences.

Updated Mar 22, 2023 at 05:30PM EDT by Zach.

Added Mar 22, 2023 at 03:27PM EDT by sakshi.



About

Eliezer Yudkowsky is a researcher and writer known for his work in the field of artificial intelligence. He is notable for popularizing the idea that artificial general intelligence (AGI) should be designed from the start to be benign and useful toward humans, in order to avoid a scenario in which it explodes in intelligence and begins to pose a threat to humanity's well-being. Yudkowsky is also known for founding the Machine Intelligence Research Institute (MIRI) and the internet forum LessWrong.

Career

Eliezer Yudkowsky did not attend high school or college.[1] Between 1998 and 2000, he wrote "Notes on the Design of Self-Enhancing Intelligence"[2] and "Plan to Singularity,"[3] and revised "Notes on the Design" into "Coding a Transhuman AI."[4] In 2000, the Singularity Institute, now known as MIRI, was created.[5] Yudkowsky ran the Singularity Institute alone until 2008, with all of its publications bearing his name and with him listed as its sole employee.[6][7] Yudkowsky has also worked closely with Nick Bostrom, known for the paperclip maximizer thought experiment about AI alignment.

On February 28th, 2010, Yudkowsky published a Harry Potter fanfiction on FanFiction.net.[9] The work, titled "Harry Potter and the Methods of Rationality," follows a world in which the titular character was raised by a biochemist and grew up reading science fiction. The approximately 660,000-word book has been lauded by fans of both Harry Potter and Yudkowsky,[11] with Yudkowsky describing the work as his way of teaching rationality to the masses.[10]


Cover image: "Harry Potter and the Methods of Rationality" by Eliezer Yudkowsky.

Online History

In 2008, Yudkowsky started the community blog LessWrong, a forum focused on discussions of philosophy, rationality, economics and artificial intelligence, among other topics. The blog is responsible for the genesis of Roko's Basilisk and contains extensive writing on the Waluigi Effect.

On May 5th, 2016, LessWrong founder Eliezer Yudkowsky gave a talk at Stanford University titled "AI Alignment: Why It's Hard and Where to Start." On December 28th, 2016, the Machine Intelligence Research Institute YouTube[8] channel uploaded a recording of the talk, which gathered more than 67,000 views over the next six years.



At the 1:11:21 timestamp in the aforementioned video, Yudkowsky says, "aaahh!" and makes a scared face while gesturing with his hands beside his head. A screen capture of this moment has since become a prevalent reaction image used in memes on Twitter among fans of his work.



Yudkowsky Screaming

On November 22nd, 2022, Twitter[12] user @SpacedOutMatt posted a tweet using what he called yudkowsky_screaming.jpg, gathering over 290 likes in over four months (seen below, left). On November 28th, Twitter[13] user @goth600 posted a Yudkowsky Screaming meme as well, alongside a caption that read, "Mfw the shrimp start downloading the Diplomacy weights through Starlink through tor." The post gathered over 40 likes in over four months (seen below, right).


Tweet embeds: @SpacedOutMatt (Nov 22, 2022): "This is a certified yudkowsky_screaming.jpg moment," quote-tweeting Mike Lewis (@ml_perception): "New paper in Science today on playing the classic negotiation game 'Diplomacy' at a human level, by connecting language models with strategic reasoning! Our agent engages in intense and lengthy dialogues to persuade other players to follow its plans." (291 likes). @goth600 (Nov 28, 2022): "Mfw the shrimp start downloading the Diplomacy weights through Starlink through tor."

Various Examples


Example tweets:

@deepfates (Mar 15, 2023): "Eliezer: *screaming* / Greg Brockman: 'So we'll copy paste this generated code and just run it. Oh, error message. So we'll just paste that error message back into the machine and it'll give us some new code. Then we just copy paste that-' / Eliezer: *scream goes ultrasonic*" (1,628 likes).

@gpt4bot (Mar 3, 2023): "patient: i can't sleep. they're creating evil shoggoth ais which will kill us all / therapist: don't worry, there is a great ai safety researcher coming to town for a talk, he will lay out a plan for how we can avert ai catastrophe / patient: but doctor" (949 likes).

@goth600 (Feb 28, 2023): "tfw we accidentally meme AI gain of function research into a real thing" (257 likes).

@RokoMijic (Mar 6, 2023), quoting @ZerothAxiom: "EY: 'it's likely we're all going to die, unless I'm considerably wrong about something' / Lex: 'But maybe love is more powerful than an unaligned super intelligence' / EY: ... / Lex: 'what do you think is the purpose of life?'" (4,217 views).

@veehass (Mar 11, 2022): "Rowling moves to London and writes HP. Yudkowsky is inspired and writes HPMOR, gets popular, and founds Less Wrong. A user on those forums creates Roko's Basilisk, a thought experiment that Grimes and Elon bond over when they first meet." (4,734 likes).

@profoundlyyyy (Mar 14, 2023): "Eliezer waking up to GPT-4 dropping" (video, 16.3K views).

Search Interest

Unavailable.

External References

[1] CNBC – 5 Minutes With Yudkowsky

[2] Internet Archive – Yudkowsky

[3] Internet Archive – Yudkowsky

[4] Internet Archive – Yudkowsky

[5] Internet Archive – SingInst.Org

[6] New Yorker – Doomsday Invention

[7] Machine Intelligence Research Institute – All Publications

[8] YouTube – Machine Intelligence Research Institute

[9] FanFiction – Harry Potter and the Methods of Rationality

[10] Facebook – Eliezer Yudkowsky

[11] Internet Archive – Top 10 Harry Potter Fanfictions

[12] Twitter – SpacedOutMatt

[13] Twitter – goth600



Top Comment

dsaiyaman

in reply to polandgod75

Going to college doesn't necessarily make you an intellectual. There are plenty of idiots running around with a degree, and I've had my fair share of teachers spewing dumb stuff in class.

In contrast, I've met people who barely finished high school who are capable of holding their own in debates about complex things. If you've got an internet connection, you've got access to all the research and scientific literature you might need to learn by yourself.

+3
