AI Alignment
Part of a series on AI / Artificial Intelligence. [View Related Entries]
This submission is currently being researched & evaluated!
You can help confirm this entry by contributing facts, media, and other evidence of notability and mutation.
About
AI Alignment refers to a field of study focused on the challenges surrounding the creation of an artificial intelligence (AI) that acts in accordance with human values and goals. The Paperclip Maximizer thought experiment is often used as an example of how alignment could be difficult due to instrumental convergence. References to the Alignment Problem are often used in memes joking about the conflict between artificial intelligence and humanity.
History
On May 6th, 1960, the journal Science[1] published an article by philosopher Norbert Wiener titled "Some Moral and Technical Consequences of Automation," which stated that intelligent machines could pose a threat to humanity if they were not specifically programmed against this risk.
Online Presence
On May 5th, 2016, LessWrong founder Eliezer Yudkowsky gave a talk at Stanford University titled "AI Alignment: Why It's Hard and Where to Start." On December 28th, 2016, the Machine Intelligence Research Institute YouTube channel uploaded a recording of the talk, which gathered more than 67,000 views over the next six years.
On February 28th, 2017, YouTuber Robert Miles released his first video on YouTube,[2] which introduced the channel as an place to explore ideas related to AGI systems and how to make them safe. Over the next several years, Miles released numerous videos discussing the challenges related to AGI alignment, including an episode titled "Intelligence and Stupidity: The Orthogonality Thesis" (shown below). The video was released on January 11th, 2018 and gathered more than 560,000 views over the next five years.
On October 6th, 2020, the book The Alignment Problem: Machine Learning and Human Values by Brian Christian was released. On November 9th, the British computer scientist Stuart Russell gave a TED talk titled “3 principles for creating safer AI” where he proposed several ideas for solving the alignment problem.
Search Interest
External References
[1] Science – Some Moral and Technical Consequences of Automation
[2] YouTube – Channel Introduction
Recent Videos
There are no videos currently available.
There are no comments currently available.
Display Comments