DEV Community

nabata
Using the "Dream Machine" Video Generation AI Service via Web API

Introduction

Recently, the Web API for Dream Machine was released. In this article, I’ll walk you through how to use it.

Logging In and Purchasing Credits

Dream Machine 1

First, log in with your Google account.

Dream Machine 2

You can purchase credits from the Billing & Credits page by selecting Add More Credits. You can specify an amount ranging from $5 to $500. For further details, check out the Dream Machine API Pricing page.

Creating an API Key

Dream Machine 3

Next, create an API Key. Note that once it’s generated, you won’t be able to view it again, so make sure to store it securely.

It is your responsibility to record the key below as you will not be able to view it again.

Setting Up the Environment

I’m running macOS 14 Sonoma, and the version of Python installed on my machine is:



$ python --version
Python 3.12.2



Install the Python SDK as follows:



$ pip install lumaai



You’ll also need the requests package for making HTTP requests:



$ pip install requests



Check the installed versions:



$ pip list | grep -e lumaai -e requests 
lumaai             1.0.2
requests           2.31.0



To avoid hard-coding the API key, I stored it as an environment variable named LUMAAI_API_KEY:



export LUMAAI_API_KEY=your_obtained_api_key_here



Generating a Video from Text

Now, let’s generate a video from text by referring to the Text to Video section of the Python SDK Guide. For more details, check the API Reference.



import os
import time

import requests
from lumaai import LumaAI

client = LumaAI(
    auth_token=os.environ.get("LUMAAI_API_KEY")  # Read the API Key from the environment
)

generation = client.generations.create(
    prompt="A teddy bear in sunglasses playing electric guitar and dancing",
    aspect_ratio="16:9",
    loop=True  # Enables looping (first and last frames match)
)

# The state can be one of queued, dreaming, completed, or failed
while generation.state not in ("completed", "failed"):
    time.sleep(3)  # Wait between polls instead of hammering the API
    generation = client.generations.get(generation.id)  # Re-fetch the status

if generation.state == "completed":
    response = requests.get(generation.assets.video)
    with open(generation.id + ".mp4", "wb") as file:
        file.write(response.content)

# You can retrieve a list of all previous requests like this:
# print(client.generations.list())



For this example, I used the prompt "A teddy bear in sunglasses playing electric guitar and dancing".

I set the aspect ratio to 16:9 and enabled loop so the video loops smoothly, with the first and last frames matching.

Here’s the generated video:

The result perfectly matches the prompt. Pretty impressive!
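The polling pattern above can also be factored into a small reusable helper. The sketch below is my own, not part of the SDK: `wait_for_generation` and its parameters are made-up names, and it only assumes each fetch returns an object with a `state` attribute taking the values listed in the guide.

```python
import time


def wait_for_generation(fetch, poll_interval=3.0, timeout=300.0):
    """Poll fetch() until the generation completes, fails, or times out.

    fetch: a zero-argument callable returning an object with a .state
    attribute ("queued", "dreaming", "completed", or "failed").
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        generation = fetch()
        if generation.state == "completed":
            return generation
        if generation.state == "failed":
            raise RuntimeError("Generation failed")
        time.sleep(poll_interval)
    raise TimeoutError("Generation did not finish within the timeout")
```

With the SDK you would call it as `wait_for_generation(lambda: client.generations.get(generation.id))`, then download `generation.assets.video` as before.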

Generating a Video from an Image

Next, I tried generating a video from an image, based on the Image to Video section of the Python SDK Guide.

You should upload and use your own cdn image urls, currently this is the only way to pass an image

This means you’ll need to upload your image to a server that can provide a URL.
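Since Dream Machine fetches the image from that URL itself, it's worth confirming the URL is publicly reachable before spending credits. Here is a quick sanity check using requests; `check_image_url` is my own helper, not part of the SDK:

```python
import requests


def check_image_url(url: str) -> bool:
    # The URL must be publicly reachable and serve an image content type
    response = requests.head(url, allow_redirects=True, timeout=10)
    return (
        response.status_code == 200
        and response.headers.get("Content-Type", "").startswith("image/")
    )
```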

For this test, I uploaded the following image to a server:

Original Image

It’s possible to specify different images for the first and last frames, but I only used an image for the first frame.
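For reference, specifying both frames means passing two entries in keyframes: frame0 for the first frame and frame1 for the last, following the naming in the SDK guide. The URLs below are placeholders:

```python
# Hypothetical example: pin both the first (frame0) and last (frame1) frames
keyframes = {
    "frame0": {"type": "image", "url": "https://example.com/first.jpg"},
    "frame1": {"type": "image", "url": "https://example.com/last.jpg"},
}
```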

Here’s the code, with the prompt "Japanese woman smiling". I did not set the loop option this time.



import os
import time

import requests
from lumaai import LumaAI

client = LumaAI(
    auth_token=os.environ.get("LUMAAI_API_KEY")  # Read the API Key from the environment
)

generation = client.generations.create(
    prompt="Japanese woman smiling",
    keyframes={
        "frame0": {
            "type": "image",
            "url": "Specify_image_url_here"
        }
    }
)

# The state can be one of queued, dreaming, completed, or failed
while generation.state not in ("completed", "failed"):
    time.sleep(3)  # Wait between polls instead of hammering the API
    generation = client.generations.get(generation.id)  # Re-fetch the status

if generation.state == "completed":
    response = requests.get(generation.assets.video)
    with open(generation.id + ".mp4", "wb") as file:
        file.write(response.content)



Here’s the video that was generated:

Once again, the prompt was accurately reflected in the output.

It’s fascinating to see how far AI technology has come every time I test something like this.

Conclusion

Being able to generate videos purely through code is incredible.

I’m excited to experiment further and see what else is possible with this tool!

Japanese Version of the Article

動画生成AIサービス「Dream Machine」をWeb APIで呼び出してみた
