ModelScope text2video
About
ModelScope text2video, also written ModelScope Text to Video, is an AI text-to-video synthesis system that can generate videos and GIFs from text-based prompts. A public demo of the model launched on the website Hugging Face in March 2023, leading to viral usage within online AI art communities, similar to precursor applications like DALL-E mini, Midjourney and AI voice generators such as ElevenLabs. Discourse about ModelScope Text to Video, and videos created with it, surfaced on Reddit, Twitter and YouTube following its release. Many of the generated videos carried visible Shutterstock watermarks, an artifact of the stock footage the model was trained on.
History
On March 19th, 2023, Redditor Illustrious_Row_9971 uploaded a post to /r/StableDiffusion[12] that stated, "First open source text to video 1.7 billion parameter diffusion model is out," earning roughly 2,000 upvotes in nine days. The same day, Twitter[1] user @_akhaliq tweeted a Hugging Face[2] link to the newly released "1.7 billion parameter text to video generation diffusion model" called ModelScope Text to Video Synthesis. The tweet received roughly 1,200 likes in nine days and started a thread by @_akhaliq with multiple replies showing samples of AI-generated MP4s converted to GIFs made with ModelScope, the first[3] of which showed the prompt "A teddy bear running in New York City" (shown below, left). The second[4] GIF in the thread showed "An astronaut riding a horse" (shown below, right).
Later on March 19th, 2023, the head of product design at Hugging Face, Victor Mustar, posted a ModelScope-generated Star Wars-style clip to Twitter.[5] It received roughly 613,900 views and 1,600 likes in nine days (shown below).
I just made my own Star Wars clip using AI (text-to-video)! pic.twitter.com/Yj8HGUl5Lf
— Victor M (@victormustar) March 19, 2023
YouTubers within the AI art community started making videos about ModelScope. For instance, on March 19th, 2023, YouTuber[6] Business Disruptors opened his video by urging viewers not to mock ModelScope's output because, like Midjourney-generated images, it would likely improve rapidly. The video received roughly 3,100 views in nine days (shown below, left). Also on March 19th, YouTuber[7] Matt Wolfe shared a video about ModelScope, gaining roughly 160,700 views in nine days (shown below, right).
In the days following its release, videos made with ModelScope surfaced on Reddit and elsewhere. On March 19th, 2023, multiple ModelScope-generated videos were shared to /r/StableDiffusion,[8][9] and posts gained increasing engagement on the subreddit over the following week. On March 27th, Redditor chaindrop shared a ModelScope-generated video of Will Smith eating spaghetti to /r/StableDiffusion,[10] gaining roughly 5,200 upvotes in less than a day. The video was then reposted to Twitter[11] by @MagusWazir, earning roughly 116,900 views and 2,400 likes (shown below).
"Will Smith eating spaghetti" generated by Modelscope text2video
credit: u/chaindrop from r/StableDiffusion pic.twitter.com/ER3hZC0lJN
— Magus Wazir (@MagusWazir) March 28, 2023
Features
The ModelScope text2video generation diffusion model consists of three sub-networks: a text feature extraction network, a text-feature-to-video latent-space diffusion model, and a video-latent-space-to-video-visual-space decoder.[13] The overall model has roughly 1.7 billion parameters.[13] The diffusion model adopts a Unet3D structure and generates video through an iterative denoising process that starts from a pure Gaussian-noise video.[13]
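The model card[13] exposes this three-stage pipeline behind a single call in the modelscope Python package. A minimal usage sketch along those lines, assuming the modelscope and huggingface_hub packages are installed and that the weights are hosted under damo-vilab/modelscope-damo-text-to-video-synthesis (names as published at the time; they may have changed since), looks roughly like this:

```python
# Minimal sketch based on the model card's example usage; package and repo
# names are assumptions that may have changed since March 2023.
import pathlib

from huggingface_hub import snapshot_download
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys

# Download the sub-network weights locally.
model_dir = pathlib.Path("weights")
snapshot_download("damo-vilab/modelscope-damo-text-to-video-synthesis",
                  repo_type="model", local_dir=model_dir)

# Build the text-to-video-synthesis pipeline (text encoder -> latent-space
# diffusion -> latent-to-video decoder) and run it on a prompt.
pipe = pipeline("text-to-video-synthesis", model_dir.as_posix())
result = pipe({"text": "An astronaut riding a horse"})
print("Saved video to:", result[OutputKeys.OUTPUT_VIDEO])
```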
To use ModelScope, one inserts a prompt into the text bar on its Hugging Face[2] page and waits roughly 60 seconds for it to generate a single two-second video. For longer videos, such as the one posted to Twitter[5] by Victor Mustar, the creator generated multiple clips within ModelScope and then stitched them together in separate editing software,[14] as sketched below.
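The exact editing workflow is not documented in the thread; as one illustrative possibility (not necessarily the method Mustar used), several short ModelScope clips could be concatenated programmatically with a library such as moviepy. The file names below are hypothetical placeholders:

```python
# Illustrative sketch only: joining several short ModelScope clips into one
# longer video. File names are hypothetical placeholders.
from moviepy.editor import VideoFileClip, concatenate_videoclips

clip_paths = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]
clips = [VideoFileClip(p) for p in clip_paths]

# "compose" pads and centers clips if their dimensions differ slightly.
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("stitched_scene.mp4")
```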
Additionally, creators such as Twitter[15] user @Mrboofyy showed how they'd purportedly taken a ModelScope-generated video and then re-rendered it in Stable Diffusion to achieve a higher-quality result (shown below).
I took a modelscope video (link with a machine gun) and re-rendered it in Stable diffusion to see what happened 😳#modelscope #colab @StableDiffusion #midjourney #ai #aiart #neuralnetworks pic.twitter.com/BJW73pjw5x
— Mrboofy (@Mrboofyy) March 20, 2023
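The tweet does not detail the workflow, but one common way to approximate this kind of "re-rendering" is to split the clip into frames, pass each frame through Stable Diffusion's img2img pipeline at low strength, and reassemble the result. A minimal sketch, assuming the diffusers library, an example SD 1.5 checkpoint and a hypothetical input file, might look like this:

```python
# Illustrative frame-by-frame img2img "re-render" sketch; not the exact
# workflow from the tweet. Assumes torch, diffusers, opencv-python, Pillow.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

cap = cv2.VideoCapture("modelscope_clip.mp4")  # hypothetical input clip
frames_out = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
    # Low strength keeps the original motion; higher values restyle more aggressively.
    styled = pipe(prompt="cinematic, highly detailed", image=image, strength=0.35).images[0]
    frames_out.append(cv2.cvtColor(np.array(styled), cv2.COLOR_RGB2BGR))
cap.release()

# Reassemble the processed frames at a low frame rate, matching the short clip.
writer = cv2.VideoWriter("rerendered.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 8, (512, 512))
for f in frames_out:
    writer.write(f)
writer.release()
```

Because each frame is restyled independently, this naive approach tends to flicker; keeping the strength low is the usual compromise between added detail and temporal consistency.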
External References
[2] Hugging Face – ModelScope Text to Video
[5] Twitter – @victormustar
[6] YouTube – Text to Video AI is Finally Here 😮 And You Can Try It
[7] YouTube – Actual AI Text-To-Video is Finally Here!
[8] Reddit – /r/StableDiffusion
[9] Reddit – /r/StableDiffusion
[10] Reddit – /r/StableDiffusion
[11] Twitter – @MagusWazir
[12] Reddit – /r/StableDiffusion
[13] Hugging Face – modelscope-damo-text-to-video-synthesis
[14] Twitter – @victormustar