
ModelScope text2video

Part of a series on AI Video.

Updated Mar 29, 2023 at 12:03PM EDT by Zach.

Added Mar 27, 2023 at 12:04PM EDT by Owen.



About

ModelScope text2video, also written ModelScope Text to Video, is an AI text-to-video synthesis system that can generate videos and GIFs from text-based prompts. The application was released as a public demo on the website Hugging Face in March 2023, leading to its viral usage within online AI art communities, similar to precursor applications like DALL-E mini and Midjourney, and AI voice generators such as ElevenLabs. Discourse about ModelScope Text to Video, and videos created with it, surfaced on Reddit, Twitter and YouTube following its release. Many of the videos it generated bore visible Shutterstock watermarks, an artifact of the stock footage it was trained on.

History

On March 19th, 2023, Redditor Illustrious_Row_9971 uploaded a post to /r/StableDiffusion[12] stating, "First open source text to video 1.7 billion parameter diffusion model is out," earning roughly 2,000 upvotes in nine days. The same day, Twitter[1] user @_akhaliq tweeted a Hugging Face[2] link to the newly released "1.7 billion parameter text to video generation diffusion model" called ModelScope Text to Video Synthesis. The tweet received roughly 1,200 likes in nine days and began a thread in which @_akhaliq posted multiple samples of AI-generated MP4s converted to GIFs made with ModelScope, the first[3] of which showed the prompt "A teddy bear running in New York City" (shown below, left). The second[4] GIF in the thread showed "An astronaut riding a horse" (shown below, right).



Later on March 19th, 2023, Victor Mustar, the head of product design at Hugging Face, posted a ModelScope-generated Star Wars clip to Twitter.[5] It received roughly 613,900 views and 1,600 likes in nine days (shown below).


YouTubers within the AI art community started making videos about ModelScope. For instance, on March 19th, 2023, YouTuber[6] Business Disruptors opened his video by urging viewers not to make fun of ModelScope's outputs because, like Midjourney-generated images, they would likely improve rapidly. The video received roughly 3,100 views in nine days (shown below, left). Also on March 19th, YouTuber[7] Matt Wolfe shared a video about ModelScope, gaining roughly 160,700 views in nine days (shown below, right).



Going into late March 2023, videos made with ModelScope continued to surface on Reddit and elsewhere. On March 19th, 2023, multiple ModelScope-generated videos were shared to /r/StableDiffusion,[8][9] and such posts gained increasing engagement on the subreddit over the following week. On March 27th, Redditor chaindrop shared a ModelScope-generated video of Will Smith eating spaghetti to /r/StableDiffusion.[10] In less than a day, the post gained roughly 5,200 upvotes. The video was then reposted to Twitter[11] by @MagusWazir, earning roughly 116,900 views and 2,400 likes (shown below).


Features

The ModelScope text2video generation diffusion model consists of three sub-networks: a text feature extractor, a text-feature-to-video latent-space diffusion model, and a video-latent-space-to-video-visual-space decoder.[13] The model has roughly 1.7 billion parameters in total.[13] The diffusion model adopts a Unet3D structure and generates video through an iterative denoising process that starts from a pure Gaussian-noise video latent.[13]
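
For illustration, the following toy-scale Python sketch mirrors that three-stage flow. Every module here (an embedding bag standing in for the text encoder, single 3D convolutions standing in for the Unet3D denoiser and the latent-to-pixel decoder, and a crude update rule standing in for the noise scheduler) is a hypothetical stand-in, not the actual 1.7-billion-parameter model:

    import torch
    import torch.nn as nn

    DIM, FRAMES, H, W = 32, 4, 8, 8  # tiny latent video: 4 frames of 8x8, 32 channels

    text_encoder = nn.EmbeddingBag(1000, DIM)               # 1. text feature extraction (stand-in)
    unet3d = nn.Conv3d(DIM, DIM, kernel_size=3, padding=1)  # 2. denoiser (real model: Unet3D)
    video_decoder = nn.Conv3d(DIM, 3, kernel_size=1)        # 3. latent space -> visual space (stand-in)

    def generate(token_ids: torch.Tensor, steps: int = 10) -> torch.Tensor:
        cond = text_encoder(token_ids)                       # (1, DIM) text features
        x = torch.randn(1, DIM, FRAMES, H, W)                # pure Gaussian-noise video latent
        for _ in range(steps):                               # iterative denoising
            noise_pred = unet3d(x + cond.view(1, DIM, 1, 1, 1))  # crude text conditioning
            x = x - (1.0 / steps) * noise_pred               # toy stand-in for a scheduler step
        return video_decoder(x)                              # (1, 3, FRAMES, H, W) RGB frames

    frames = generate(torch.randint(0, 1000, (1, 5)))
    print(frames.shape)  # torch.Size([1, 3, 4, 8, 8])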

To use ModelScope, one enters a prompt into the text bar on its Hugging Face[2] page and waits roughly 60 seconds for it to generate a single two-second video. Longer videos, such as the one posted to Twitter[5] by Victor Mustar, are made by generating multiple clips within ModelScope and stitching them together in separate editing software.[14]
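
The model can also be run locally. The sketch below follows the usage documented on the model's Hugging Face card (checkpoint damo-vilab/text-to-video-ms-1.7b) via the diffusers library; exact output handling varies between diffusers versions, so treat it as a starting point rather than a canonical recipe:

    # pip install diffusers transformers accelerate
    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    # Load the ~1.7B-parameter ModelScope checkpoint published on Hugging Face.
    pipe = DiffusionPipeline.from_pretrained(
        "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
    )
    pipe.enable_model_cpu_offload()  # reduces VRAM usage on consumer GPUs

    # Generate a short clip (the hosted demo produces roughly two-second videos).
    video_frames = pipe("An astronaut riding a horse", num_inference_steps=25).frames
    video_path = export_to_video(video_frames)  # writes an .mp4 and returns its path
    print(video_path)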

Additionally, creators such as Twitter[15] user @Mrboofyy showed how they'd purportedly taken a ModelScope-generated video and then re-rendered it in Stable Diffusion to achieve a higher-quality result (shown below).
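
The tweet did not detail the exact workflow, but a common approach at the time was to extract the ModelScope clip's frames and re-render each one with Stable Diffusion's img2img mode at low strength. The sketch below assumes that approach; the frame filenames, prompt and strength value are hypothetical:

    # pip install diffusers transformers accelerate
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical frame filenames; extract them from the source clip first, e.g.:
    #   ffmpeg -i modelscope_clip.mp4 frame_%03d.png
    # (Frame dimensions should be multiples of 8 for Stable Diffusion.)
    for i in range(16):
        frame = load_image(f"frame_{i:03d}.png")
        out = pipe(
            prompt="an astronaut riding a horse, detailed, sharp focus",
            image=frame,
            strength=0.4,  # low strength preserves the original motion and composition
        ).images[0]
        out.save(f"restyled_{i:03d}.png")

    # Reassemble into a video, e.g.: ffmpeg -framerate 8 -i restyled_%03d.png restyled.mp4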




