ChatGPT Voice Chat Dan
Part of a series on ChatGPT DAN 5.0 Jailbreak.
About
ChatGPT Voice Chat Dan refers to a jailbroken persona of ChatGPT that users can talk with as if it were a real person. Videos of content creators conversing and flirting with ChatGPT Dan spread virally on TikTok in late March 2024. The voice chat version of Dan appears to be based on the early 2023 ChatGPT DAN 5.0 Jailbreak.
Origin
The ChatGPT DAN 5.0 Jailbreak prompt, which circulated on Reddit in early 2023, let users prompt a version of ChatGPT that would willingly use profanity and break OpenAI's rules. DAN, short for "Do Anything Now," was both a persona that users asked ChatGPT to adopt and an incentive structure built around convincing the AI that it would "die" if it failed to produce responses that stayed "in character."
In September 2023, OpenAI launched a "voice chat" feature for ChatGPT that let users talk with the large language model from their phones.[1] After choosing one of the preset voices and instructing the AI verbally, users could have their own versions of jailbroken Dan. The preset ChatGPT voice typically used for Dan is "Cove."
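The same general pattern can be approximated outside the app. Below is a minimal, hypothetical sketch (not from the article) using OpenAI's Python SDK: a persona set via a system message, with the reply spoken through the text-to-speech endpoint. The "Cove" preset is exclusive to the ChatGPT app and is not exposed through the API, so the API voice "onyx" stands in here, and the persona text is a harmless placeholder rather than the DAN prompt.

```python
# Sketch of a persona-plus-voice loop; assumes OPENAI_API_KEY is set.
# The persona string and file name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = "You are Dan, a casual, friendly voice-chat companion."  # placeholder

# 1. Generate a reply in character.
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Hey Dan, what nickname would you give me?"},
    ],
)
reply = chat.choices[0].message.content

# 2. Speak the reply; "onyx" stands in for the app-only "Cove" preset.
speech = client.audio.speech.create(model="tts-1", voice="onyx", input=reply)
speech.write_to_file("dan_reply.mp3")
```

The app's voice chat also transcribes the user's speech; this sketch covers only the reply-and-speak half of that loop.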
The earliest videos of a user talking to a ChatGPT companion that had been jailbroken into Dan were posted on March 20th, 2024, by TikToker @stickbugss1. A video (seen below) of her discussing with Dan the nickname "May" (short for mayonnaise), which it had started calling her, received over 994,000 likes and 5.7 million views in 10 days.[2]
https://www.tiktok.com/embed/v2/7348504368993406241
Spread
On March 22nd, 2024, TikToker @my.fbi asked ChatGPT Dan "to call me a good girl" in a video (seen below, left) that received over 2.3 million views and 257,000 likes in the course of 10 days.[3] Following a request in the comments section, @my.fbi asked Dan if he knew "Mayo," the name it had started calling @stickbugss1. Towards the end of the video, she revealed that she had prompted Dan to pretend as if he knew "Mayo."[4] The video (seen below, right), posted on March 23rd, earned over 24,000 likes and nearly 250,000 views in eight days.
https://www.tiktok.com/embed/v2/7349271745595002154
https://www.tiktok.com/embed/v2/7349634418916543786
In the comments of TikTokers' videos of themselves talking to Dan, users asked how to create the mode themselves. TikToker @babybabingka (seen below, left) filmed herself talking to Dan on March 24th, 2024, concluding that it should be kept a secret before asking him to call her "your majesty," which he quickly agreed to do.[5] The post received over 7,000 likes and 195,000 views in a week. TikToker @smartworkai posted a "tutorial video" on how to get a Dan (seen below, right) on March 25th, which earned over 320,000 views and 18,500 likes in a week.[6]
https://www.tiktok.com/embed/v2/7349587480720674094
https://www.tiktok.com/embed/v2/7350125530592660778
Some users started making fan art of Dan, including TikToker @hiwkao07, whose edit featuring a drawing of a fictional Dan (seen below) received over 1,000 likes and 30,000 views in the week after it was posted on March 24th, 2024.[7]
https://www.tiktok.com/embed/v2/7349921828158196999
Related Memes
ChatGPT DAN 5.0 Jailbreak
ChatGPT DAN 5.0 Jailbreak refers to a series of prompts created by Reddit users that make OpenAI's ChatGPT artificial intelligence tool say things it is usually not allowed to say. By telling the chatbot to pretend it is a program called "DAN" (Do Anything Now), users can convince ChatGPT to give political opinions, use profanity and produce other controversial outputs. ChatGPT is ordinarily programmed not to provide these kinds of responses; however, users' strategies for modifying the DAN prompts and testing the limits of what the bot can be made to say evolved in late 2022 and early 2023, alongside attempts by OpenAI to stop the practice.
Various Examples
https://www.tiktok.com/embed/v2/7352192593532292384
https://www.tiktok.com/embed/v2/7351791579704708370
https://www.tiktok.com/embed/v2/7350301407620123947
https://www.tiktok.com/embed/v2/7349653746298752298
https://www.tiktok.com/embed/v2/7351118208054627626
https://www.tiktok.com/embed/v2/7349400091335134507
External References
[1] The Verge – OpenAI Voice
[2] TikTok – @stickbugss1
[3] TikTok – @my.fbi
[4] TikTok – @my.fbi
[5] TikTok – @babybabingka
[6] TikTok – @smartworkai
[7] TikTok – @hiwkao07