Luma AI Dream Machine: Is It All Hype, or Does It Compare Well to OpenAI's Sora?

I put Luma Dream Machine to the test, and here's how it stacks up against Sora.


Christopher Brady, creator of BytesToolKit and Future Fusion Games


"This is the best video generator I have tested!"

"A futuristic cute 3d robot sitting in front of a laptop waving at the camera"

Luma Labs, which introduced the 3D generation model Genie, is now pioneering AI-generated video with Dream Machine, a model that could be close to the quality of OpenAI's Sora.


Luma's servers were swamped by public demand for access, so I had to wait for the queue to shorten. After one night of waiting, though, I can confirm that the "dreaming" process only takes about two minutes once you reach the front of the queue.


Some early users shared videos on social media that almost seemed too good to be true, even suspiciously like they were cherry-picked to show off the model. After trying it myself, however, I can assure you that the results are indeed very good.


Dream Machine may not yet have reached the level of Sora or even Kling, but it still stands out for its grasp of motion and its immediate understanding of a prompt. Most importantly, unlike Sora, you can actually use Dream Machine today.


Each video made by Dream Machine lasts about five seconds, almost twice as long as those produced by Runway or Pika Labs, with no need for extensions. The model's versatility is impressive: even without extensions, some videos contain more than one shot.

I tested it and made several videos. The first was ready in just three or four minutes; the others took half an hour. There are some flaws in the blending and blurring between pixels, but the movement was by far the best of any model I've tried, across motions like:


  • Walking
  • Dancing
  • Running

Previous models would show people walking backward the way they came, or pan around a dancer who just stood still instead of dancing. Not the case with Dream Machine.


Dream Machine animated subjects in motion without my needing to specify which direction they were moving. It was particularly good at running.


But once you've typed the prompt, your control ends: there are no fine-tuning options or individual controls.


This may just be because the model is new, but the prompt calls all the shots, and a natural language model even enhances it behind the scenes!


Ideogram and Leonardo use this same method in image generation. The effect is to make you more explicit about what you want to see.


It may also be a feature of video models built on technologies other than straight diffusion. Haiper, the UK-based AI video startup, also says its model works best when you let the prompt do the work, and Sora is said to offer little more than a simple text prompt with very few additional controls.

Testing Dream Machine's Prompt Adherence


First, I came up with a series of prompts to test Dream Machine:


  • A random prompt with the "Idea" feature
  • A detailed prompt
  • Image to video
  • A direct comparison with OpenAI's Sora showcase



Luma AI prompt screenshot: creating a video with their AI generator Dream Machine


Random Prompt with AI "Idea" Feature

Prompt: "A beautiful woman laughing underwater, wearing a bikini and snorkle, her expression denotes calm and happiness"

For this video I used the "Idea" feature. I wanted to see what Luma AI could come up with using its own randomly generated prompt.

The prompt: "A Beautiful woman laughing underwater, wearing a bikini and snorkel, her expression denotes calm and happiness ".


This was the first video I created, and I think the quality is fairly good. The movement is dynamic and the physics are realistic; however, the snorkel was missing from the video. Even so, it was better than every other video generator I have tested in the past.


The second video was much better, with more movement in the background, closer adherence to the prompt, and only slight morphing. I was truly impressed by how well this new tool handles movement.

The Detailed Prompt Results

Prompt: "A beautiful woman wearing a dress at a ballroom, her face shows happiness, and flirting, she grabs a wine glass looking away from the camera, she looks back and blows a kiss to the camera, dynamic background of people dancing, talking, and walking"

I wanted to see what a video would look like if I prompted with multiple elements: movement, descriptions of the subject, emotions, actions, and background control.


I was really impressed by the adherence to the prompt, and while this AI video generator is definitely beating the quality of many others, OpenAI's showcase of Sora still looks a lot better. There was minor morphing, but I wanted to show the tool's results without cherry-picking, so the comparison could be more direct.


I thought I would try out one of the prompts from OpenAI's website, and I was pleasantly surprised! There is no way, as of the date of writing, to test exactly how well Sora performs, but normally with a video generator such as Runway or Pika you have to generate a few times to get results like these!

Image to Video Using the Same Prompt

I made an image using Midjourney.


  A woman looking at the camera wearing an intricate dress in a ballroom, image made with Midjourney


  • prompt: "Beautiful woman in a ballroom looking at the camer, intricate dress, people in the background".

It's amazing how much it changed the details in the generated video! I didn't put too much effort into the prompt, but I think it turned out fine for testing Luma Labs' new video generator!


You should honestly try Dream Machine for yourself. This video does have some morphing, but with a refined prompt it can definitely be used for multiple purposes.

Directly Comparing OpenAI Sora Prompt to Luma AI Dream Machine

Prompt: "A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about."

Now remember, this was my first generation with this prompt, and this AI video generator is starting to really impress me. It is definitely not on the level of the Sora showcase on OpenAI's website, but of the generative video AI that people can actually use in their content creation process right now, the only thing I have seen that comes even close is Haiper.


It is hard to believe this is the launch version; generative AI tools do not usually start out at this level of quality. Imagine the possibilities: music videos, B-roll for YouTube, dynamic stories told with visuals. There is no doubt in my mind that AI will eventually allow anyone to make full-length films from a single text prompt.

A man in shock seeing something amazing on his laptop
Created with Leonardo AI

