AI Alignment

Part of a series on AI / Artificial Intelligence.

About

AI Alignment refers to a field of study focused on the challenges of creating an artificial intelligence (AI) that acts in accordance with human values and goals. The Paperclip Maximizer thought experiment is often cited to illustrate how instrumental convergence can make alignment difficult: an AI pursuing a seemingly harmless goal may adopt harmful subgoals, such as acquiring resources or resisting shutdown, in service of that goal. The alignment problem is frequently referenced in memes joking about the conflict between artificial intelligence and humanity.

History

On May 6th, 1960, the journal Science[1] published an article by mathematician Norbert Wiener titled "Some Moral and Technical Consequences of Automation," which warned that intelligent machines could pose a threat to humanity unless they were specifically designed to guard against that risk.

Online Presence

On May 5th, 2016, LessWrong founder Eliezer Yudkowsky gave a talk at Stanford University titled "AI Alignment: Why It's Hard and Where to Start." On December 28th, 2016, the Machine Intelligence Research Institute YouTube channel uploaded a recording of the talk, which gathered more than 67,000 views over the next six years.



On February 28th, 2017, YouTuber Robert Miles released his first video on YouTube,[2] which introduced the channel as a place to explore ideas related to AGI systems and how to make them safe. Over the next several years, Miles released numerous videos discussing the challenges of AGI alignment, including an episode titled "Intelligence and Stupidity: The Orthogonality Thesis" (shown below). The video was released on January 11th, 2018, and gathered more than 560,000 views over the next five years.



On October 6th, 2020, the book The Alignment Problem: Machine Learning and Human Values by Brian Christian was released. On November 9th, British computer scientist Stuart Russell gave a TED talk titled "3 principles for creating safer AI," in which he proposed several ideas for solving the alignment problem.



Search Interest

External References

[1] Science – Some Moral and Technical Consequences of Automation

[2] YouTube – Channel Introduction
