Sora, the New Text-to-Video Generator

Reading Time: 3 minutes
Image: a snapshot from an AI-generated video of a woman walking, created by Sora (Image Credit: OpenAI)

Artificial intelligence (AI) has become the defining technology of this generation, a sensation that has made headlines worldwide. ChatGPT, the chatbot OpenAI released on November 30, 2022, redefined public expectations of AI by showing that machines can learn, interact, and make decisions. Its ability to work across borders and languages let it integrate seamlessly into the fabric of society, its adaptability and continuous learning kept it ahead of new challenges, and its safeguards protected it against misuse. As society embraced ChatGPT as a tool, its influence began to alter the trajectory of human progress.

Recently, OpenAI ventured into AI-generated video with Sora, a model that can take a written prompt and turn it into a short video. While Sora isn't yet available to the public, the company says it has been "engaging with policymakers and artists" to shape the best possible experience before real users get access. If it works as promised at release, Sora will mark a milestone in the evolution of AI technology. Text-to-video models have been around for a while, but Sora distinguishes itself through its exceptional video quality and attention to detail. Industry experts praise its ability to produce lifelike, coherent videos, an advancement for both OpenAI and AI in general. Sora's arrival also highlights the ongoing progress and innovation within the AI community, illustrating a commitment to technological advancement and creative exploration.

Still, while people are excited about what Sora can do, there are worries about its ethical and social effects. Like any transformative technology, AI-generated video raises questions about how it could be misused. Making realistic videos so easily could fuel fraud, propaganda, and misinformation, deepening societal divides, especially now, when people are already wary of what they see online and fake videos are widespread. Artificial footage could be used to harass someone or even sway an election. Telling AI-generated content apart from real life is a problem that could cause serious harm in many settings. For instance, a person could be accused of robbery, and AI could generate a realistic security-camera recording of the crime in progress; because of its uncanny resemblance to real footage, it could be presented in court and wrongfully convict someone of a crime they didn't commit. In response, AI companies are collaborating with media networks and governments to preserve trust in authentic content.

One method under discussion is embedding irremovable watermarks in AI-generated content so that it can be identified and not stolen or passed off as real. According to New Scientist, "OpenAI has also taken steps to prevent its commercial AI models from generating depictions of extreme violence, sexual content, hateful imagery, and real politicians or celebrities." These safety steps are just a few of the precautions the company has taken. Overall, Sora offers an early glimpse of a better future for AI, with video generation extending AI further into human life and interaction.

Written by Divya Saha
