It's only when you start to look a little deeper that you might notice something is amiss.
It's by far the most advanced model of its kind, converting text prompts into moving images.
Things have come a long way since the infamous "Will Smith eating spaghetti" Reddit post surfaced in early 2023.
At the time of writing in March 2024, SORA is still in closed testing.
Spotting AI-generated photos and videos is more of an art than an exact science.
Remember that models are always evolving, so these traits will become harder to spot.
Sometimes the choice of subject and context of the video can make all the difference.
One example of OpenAI's SORA depicted a woman walking down a neon-lit Tokyo street.
The woman's clothing in the opening scene shows a red dress with a full-length cardigan and a leather jacket.
The scene is dense, filled with reflections and background actors, which helps distract you from the gaffe.
Something else to watch out for is ghosting, where objects phase in and out of existence.
OpenAI's video of a gold rush California town provides a good example of this.
Fingers and hand placement are particularly difficult for AI to pull off.
Generative models have a tendency to produce hands with more or fewer fingers than you'd expect.
Sometimes things just don't look quite right: fingers are too thin, or there are too many knuckles.
Look for glasses that don't seem symmetrical, or that merge into faces.
In a video, they may even phase in and out of view and change between scenes.
Take a look at the Tokyo night scene video again.
At one point, a person seems to duplicate themselves.
In some areas, the walking animations are odd too.
Keep an eye out for suspect background activity to spot AI-generated video.
Sometimes you'll notice natural objects like trees, fields, or forests interacting in strange ways.
Perspectives can seem off, and moving objects sometimes don't quite line up with the path portrayed in the animation.
Another example is OpenAI's Big Sur coastline drone shot.
Have you ever seen a wave that looks that straight in nature?
Subjects may look perfectly lit in instances where you'd expect them not to be.
More often than not, the uncanny valley effect simply comes down to a feeling.
The aforementioned spaceman video is a good example of this.
Why is the animation seemingly played in reverse?
The knitted helmet I can excuse, but this thing has puzzled me since the moment I saw it.
The same goes for movements.
The SORA cat in bed video is impressive, but the movement isn't right.
Cat owners will recognize that the behavior is strange and unnatural.
It feels like there's a mismatch between the behavior of the subject and the context of the situation.
Over time, this will improve.
Garbled text is another good example of something generative AI models often get wrong.
Most generative models have active communities both on the web and on social media platforms like Reddit.
Find some and take a look at what people are coming up with.
On top of this, you could generate your own images using a tool like Stable Diffusion.
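If you want to experiment hands-on, the snippet below is a minimal sketch of running Stable Diffusion locally via the Hugging Face diffusers library, which is one common way to do this; the specific model ID and prompt are illustrative assumptions, not something the article prescribes.

```python
# Minimal sketch: generate a test image with Stable Diffusion via diffusers.
# Assumes: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model ID; the weights are several GB on first download.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a CUDA GPU; on CPU, drop torch_dtype and use "cpu"

# Prompts involving hands, lettering, or reflections tend to surface the
# telltale flaws described in this article.
image = pipe("a close-up photo of two hands shuffling playing cards").images[0]
image.save("hands_test.png")
```

Generate a batch of images like this and study the fingers, text, and backgrounds; it's a quick way to train your eye for the artifacts described above.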
AI-generated video is impressive, fascinating, and terrifying in equal measure.
Over time, these tips will likely become less relevant as models overcome their weaknesses.
So buckle up, because you haven't seen anything yet.