
Sora, OpenAI’s New Tool For Creating Videos From Text

Screenshot of a video generated by Sora. (Credit: OpenAI)

OpenAI, the artificial intelligence giant behind ChatGPT, has just unveiled Sora, a new tool that can generate videos from a simple prompt. Up to 60 seconds long, these videos can feature multiple characters, camera movements, and detailed scenes.

It is another giant leap for artificial intelligence and a new reason for creative professionals to worry. On Thursday, February 15, the American company OpenAI, already behind ChatGPT and DALL·E, unveiled its latest tool. Sora is a text-to-video model that can create realistic videos of up to 60 seconds from a simple prompt.

On OpenAI’s website, the company explains:

“Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately persist characters and visual style.”

Several Videos

On X, OpenAI has released several convincing videos that were created by Sora. For example, one of the prompts used to generate a video was:

“A movie trailer featuring the adventures of the 30-year-old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

More of these videos can be viewed on Sora’s website. OpenAI says that all these short films were generated by Sora and were not modified afterward, and that producing them took several hours (not several days).

Some Weaknesses

OpenAI acknowledges it: Sora is not perfect and still has weaknesses.

“It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark.”

Sora may also mix up left and right, and it can struggle to follow a specific camera trajectory.

Who Can Use Sora?

With misinformation risks growing amid international tensions and the American elections, OpenAI has announced no release date so far. Sora is not even integrated into the services already accessible to the general public.

On its website, the company says that for the moment Sora is a research project. It is only available to red teamers, who will probe critical areas for risks. A number of creative professionals (artists, designers, and filmmakers) will also be able to test the tool and give feedback on the artistic side.

But the general public will have to wait. The risk of deepfakes is serious, and such a tool will need a legal framework before it can be used by a large number of people.