
How to Make a Short Film with AI: Part 2

In Part 2 of our blog series about making a short film with AI, we’re going to deep-dive a bit into how you can use AI tools to create your very own mini masterpiece. We’ll discuss the different AI tools I used to make The Bride, and walk you through some of my processes and workflows so you can begin stretching your own creative AI legs. (Being creatively productive is way better than doom-scrolling, trust me.)

Part 1 of our series dealt with training ChatGPT to help you craft your story, screenplay, storyboard, and dialogue. If you missed it, make sure to check it out here:

The AI Toolkit

But just so you know what you’re in for, here are all the tools we’ll be discussing – the AI tools you will need to begin your short film creations. There are some freebies and freemiums, and a few come at a price – but none will break the bank.

  1. ChatGPT to generate a script – ChatGPT is a large language model that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. We’ll be using ChatGPT to generate the script, storyboard & dialogue.
  2. Midjourney to create images – Midjourney is an AI image creation tool that can generate realistic and creative images. We’ll be using Midjourney to create our images, such as scenes, backgrounds, and characters.
  3. Runway Gen 2 to create videos – Runway Gen 2 is an AI video creation tool that can create videos from existing images or text descriptions or both. We will be relying on Runway Gen 2 to create our video scenes from our Midjourney images.
  4. Pika Labs as an alternative to Runway – a matter of personal taste and preference; Pika is another powerful text/image-to-video platform.
  5. ElevenLabs to generate voiceovers – ElevenLabs is an AI voiceover tool that can generate voiceovers from text. We’re going to use ElevenLabs to generate voiceovers, whether it’s narration or dialogue.
  6. Uppbeat to find the perfect music – Uppbeat is a royalty-free music and sound effects platform for creators, offering a wide range of music in different styles and moods. We’ll be relying on Uppbeat for the perfect score or soundtrack for our AI short film.
  7. Filmora 12 to edit and finalize – Once you have created all of the elements of your AI short film, you will need to edit and finalize it. This includes adding music, sound effects, and titles. For that, I recommend Filmora 12 – it’s fun and easy to use, and inexpensive.
  8. Kaiber to change style – As a bonus, you can check out Kaiber’s powerful video-to-video transform AI to change up the whole look of your film.
It was a dark and stormy night.

Step 3: Generate Visuals Using Midjourney

Next for your film, you will need imagery. We’re not using any film equipment for this; instead, AI will be providing the images via an image generator. There is a plethora of generators out there, and you may well have your favorite already. Mine is undoubtedly Midjourney.

Midjourney, with its remarkable ability to generate visuals from text descriptions, can (and did) play a pivotal role in bringing The Bride to life. Admittedly, it was a rather broad prompt that got me the image I decided I wanted to base the rest of the film around. (The keen mind might pause here to wonder how an image generator that runs on randomness will produce the sort of consistent imagery necessary for a cohesive film project…)

More on that later.

But before diving into Midjourney, ensure that your AI short film’s storyboard is well-prepared – it will then help you prompt your imagery. Each scene should be accompanied by descriptive text that conveys the mood, setting, and actions of the characters.

Now, let’s explore how to integrate Midjourney into your AI short film creation process:

Create a Midjourney Account

Start by creating an account on the Midjourney Discord platform if you haven’t already. For best performance, I do recommend at least the minimum paid plan, which will give you access to more generations and the Midjourney Bot, where you can best keep track of your generating activities (among other things). We have a whole section of articles that can guide you through the various techniques of generating images on Midjourney; if you’re new to the platform, definitely check them out.

Translate Storyboard Text to Prompts

Take the descriptive text from your storyboard and craft it into text prompts that Midjourney can understand. You can use something like ChatGPT to help you automate this, but Chat won’t know the parameter settings for Midjourney. So I would recommend using Chat to help with some of the prompt language, but revising it for Midjourney yourself. For example, if your storyboard needs a scene set in a spooky laboratory with all the usual spooky laboratory accoutrements, Chat might give you this prompt: An external shot of the laboratory amidst a thunderstorm, setting the intense atmosphere for the upcoming escape.

Midjourney does not require narrative explanations. Your prompt should take more of a keyword approach. You want to capture image style, subject, mood, and the important elements. Remember, Midjourney has remix and inpainting functionality to assist in refining your results.

cinematic photography of Bride of Frankenstein, spooky laboratory, hazy blue light, electric elements, monochromatic

Beware of Abby Normal.

Input Prompts and Generate Images

Input your prompts into Midjourney using the /imagine command in the Midjourney bot or the proper #generate Discord channel (refer to the Midjourney rules in the Discord forum for basic functions). Midjourney will then generate a selection of four images based on your prompts. It’s important to note that the quality and relevance of the generated images may vary, so be prepared to iterate and refine your prompts for the best results.

Of course, maintaining visual consistency throughout your AI short film can be a daunting challenge. After managing to generate an image of the Bride of Frankenstein that perfectly captured her essence for one scene in the film, the challenge immediately arose of making her appear consistent in subsequent scenes.

Midjourney, while a powerful tool for AI artistry, introduces an element of unpredictability due to its default settings (Midjourney randomness as we call it). These settings are designed to provide swift and diverse results, which can sometimes hinder the pursuit of visual continuity.

Enter the Midjourney Seed, a helpful tool to achieve a cohesive and visually engaging narrative. I have a whole article on how to use the Midjourney Seed HERE.
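
In brief, the idea is that a seed is passed as a prompt parameter, and reusing the same seed with closely related prompts steers Midjourney toward consistent output. As a quick sketch (the seed value here is arbitrary – see the linked article for the full workflow, including how to retrieve the seed of a previous generation):

```
/imagine prompt: cinematic photography of Bride of Frankenstein, spooky laboratory, hazy blue light, electric elements, monochromatic --seed 777
```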

Runway’s Image + Description-to-video generator interface.

Step 4: Using Runway Gen 2’s Image-to-Video Feature

Now to move to your video elements.

Runway is a leading platform for creating videos with AI. Specifically, it offers the ability to generate videos from text and images using its Gen-2 technology. If you’re unfamiliar with what this means: you can create a video based on a simple text prompt (i.e. the bride of Frankenstein in a spooky laboratory), or you can create a video by uploading an image, and Runway’s AI will transform it into moving footage.

Since we just generated our scenes in Midjourney, this is the process we are going to follow: systematically generating our AI short film scene by scene in Runway from our Midjourney outputs. There are different ways to go about this, as Runway has a variety of tools to “direct” your film, including camera motion controls, single motion controls, and prompting tips and directions, all of which I will break down in a separate article.

For now, here’s a basic step-by-step guide on how to use Runway Gen-2:

  1. Select “Generate Videos” from the Menu: On the left sidebar, locate and click on “Generate Videos” under the “VIDEOS” section.
  2. Choose Generation Type: Runway offers two types of video generation: Gen-1 and Gen-2. Gen-1 involves generating videos based on an existing video input, while Gen-2 allows you to generate videos using text, images, or a combination of both.
  3. Select Generation Method (Text/Image to Video): You can choose either text or image as a starting point; since we’ve generated images for this purpose, we will start with image. Click on “IMAGE + DESCRIPTION” to select this option as the starting point for your video.
  4. Upload or Drag-and-Drop an Image: Drag and drop an image you want to use as the base for your video, or click on “upload a file” to browse and select an image from your computer. You can also choose an image from the available assets you have previously uploaded.
  5. Add Image Description: Provide a description for the image you uploaded and how you want it to animate. For example: Bride of Frankenstein examining skull. This description will be used to guide the AI in generating the video content.
  6. Configure Advanced Camera Controls: Use the “Advanced camera control” options to customize the camera movement and behavior during the video generation:
    • Horizontal: Adjust the horizontal movement of the camera.
    • Vertical: Modify the vertical movement of the camera.
    • Zoom: Control the zoom level of the camera.
    • Roll: Adjust the rolling movement of the camera.
    • Speed: Set the speed of the camera movement.
  7. Configure General Motion (Optional): Explore the “General Motion” section to further customize the motion of the video, although this is marked as BETA.
  8. If you have a paid plan, you can remove watermarks and upscale your video.
  9. Experiment with different settings and input to achieve the video style you desire.

“We did the Monster Mash.”

Change Up Your Animation with Pika Labs

A short aside: Pika Labs is another powerful tool that can bring you through this same video-generating process. As a text/image-to-video platform with camera control parameters, some may argue it’s Runway’s main competitor. I’m not going to deep-dive here because our contributor Freddy Kraft has already tackled Pika in another article. Make sure to check it out:

Pika Labs: How to Make a Movie

Step 5: Adding Voices with ElevenLabs

So you’ve taken the time to write a script (or generate a script), and whether it’s a narration or dialogue, you will likely bump up against the need for voiceovers. Good news, AI can do that too. The significance of authentic and human-like voices for your AI short films cannot be overstated. So if you’re not already familiar, meet ElevenLabs, a frontrunner in “humanesque” text-to-speech technology.

I came across ElevenLabs when I was searching for the best AI voice synthesizer – one that would sound the most human. That means a voice that isn’t robotic, one capable of emotion, inflection, and intuitive vocalization based on the material – which is exactly what ElevenLabs does. ElevenLabs’ Prime AI Text-to-Speech technology offers unparalleled realism in voice synthesis. It infuses your AI characters with voices so convincing that your audience may not be able to tell the difference from real humans. This level of authenticity adds depth and engagement to your short film project.

What’s more, ElevenLabs has a vast library of voices to choose from, including accents and foreign language voices. ElevenLabs actually has the ability to speak in a variety of different languages, rather than just provide accents in English. Below is a video I made to showcase some of the languages ElevenLabs can generate.

How to Use ElevenLabs:

Prepare your transcript: ElevenLabs is a text-to-voice tool, meaning you upload your text and the AI transforms the copy into a vocalization (voice synthesis). In order to do this, make sure your transcript is prepared: edit out anything you do not want the voice to actually speak (scene descriptions, stage directions, etc.)

If you’re doing multiple character dialogue, make sure to structure the text for each character accordingly, as you can only generate one voice at a time. You may want to separate all dialogue by character, generate each character’s full script at once, and then chop them up in post production. (That might be a lot of post work.) It may be better to generate one line at a time, so you have separate audio files instead of one giant file; depending on the amount of dialogue, you decide what works best.
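
If your script is long, the per-character split above is tedious to do by hand. Here’s a minimal Python sketch of how you might automate it, assuming a hypothetical `CHARACTER: dialogue` transcript format – adapt the parsing to however your own script is laid out:

```python
from collections import defaultdict

def split_by_character(script: str) -> dict:
    """Group dialogue lines by character so each voice can be
    generated separately in a text-to-speech tool."""
    lines_by_character = defaultdict(list)
    for raw_line in script.splitlines():
        line = raw_line.strip()
        # Expect lines shaped like "CHARACTER: dialogue"; skip everything
        # else (scene descriptions, stage directions, blank lines).
        if ":" not in line:
            continue
        character, _, dialogue = line.partition(":")
        if character.isupper() and dialogue.strip():
            lines_by_character[character.strip()].append(dialogue.strip())
    return dict(lines_by_character)

script = """
INT. LABORATORY - NIGHT
BRIDE: It's alive!
DOCTOR: Stay back.
BRIDE: I said... it's alive!
"""
print(split_by_character(script))
```

Each character’s list can then be pasted into ElevenLabs (or written out line by line to separate text files), keeping one audio file per character or per line.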

Pro-Tips: ElevenLabs can intuitively interpret emotion and inflection based on the context of the text. That really shows when you’re generating something like a short story or novel chapter. However, when you’re generating straight narration (like I did with The Bride) or different characters (like Freddy Kraft’s horror short “I LOVE YOU SO” below), you might want to add direction into the text. For instance, if you want a character to say something a certain way, with excitement for instance, type: “Says excitedly:” before the actual lines of dialogue. That will indicate to the AI how to interpret the dialogue. (You will need to clip that piece of dialogue out in the editing process.)

ElevenLabs Speech Synthesis interface.

Choosing the Right Voices: One of the strengths of ElevenLabs is its diverse library of voices. You can preview voices in the library, and save the ones you like (even giving them new names per project and character voice. I have a voice named for every Pink Horn character so my voices stay consistent in my Pink Horn projects.) Pay attention to factors such as tone, pitch, and style to ensure a good fit with your AI characters’ personalities. You can even clone your own voice and upload it in the voice lab.

Customizing Speech Parameters: ElevenLabs offers customization options that allow you to fine-tune parameters like speech rate, pitch, and tone. Adjust these settings to align with the mood and context of each dialogue. This level of control enables you to create nuanced and dynamic conversations. Make refinements as necessary to achieve the desired impact.

Generating Audio: Once you’ve made your selections and customizations, use ElevenLabs to generate audio clips for each dialogue segment. As soon as you hit the ‘Generate’ button, the text is synthesized and your credits are depleted accordingly. As with many of these tools, there are optional plans that determine the number of credits you can use. If you have a limited number, be vigilant about the text you are generating, and make sure it is precisely what you want it to say. You can go back and regenerate, but you may be wasting your precious credits.
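
If you’d rather script this step than click through the web UI, ElevenLabs also exposes an HTTP API. Below is a minimal Python sketch of assembling such a request; the endpoint shape, header name, and settings fields reflect the API as commonly documented at the time of writing, so treat them as assumptions and check the current ElevenLabs API reference before relying on them:

```python
import json

# Assumed API base and endpoint shape; verify against ElevenLabs'
# current documentation before use.
API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(voice_id: str, text: str, api_key: str,
                      stability: float = 0.5, similarity_boost: float = 0.75):
    """Assemble the URL, headers, and JSON body for a text-to-speech
    call; send it with any HTTP client (urllib, requests, httpx...)."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": api_key,  # your account's API key
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        "voice_settings": {
            "stability": stability,              # lower = more expressive
            "similarity_boost": similarity_boost,
        },
    }
    return url, headers, json.dumps(body)

url, headers, body = build_tts_request("my-voice-id", "It's alive!", "sk-...")
print(url)
```

Generating line by line this way naturally leaves you with one audio file per line of dialogue, which makes the editing stage much easier.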

Step 6: Scoring Your Masterpiece with the Help of Uppbeat

Every content creator knows the importance of music in storytelling. However, the search for the ideal soundtrack can be daunting. Traditional music searches are often time-consuming and frustrating. You have a vision, a mood to evoke, and a story to tell, but finding the right music can be a roadblock.

Uppbeat is a royalty-free music and sound effects platform for creators. It offers a wide range of music in different styles and moods, including upbeat music. Creators can use the music and sound effects for their YouTube videos, TikTok videos, streaming videos, social media posts, podcasts, and more. Uppbeat offers a wide range of genres and styles, from energizing electronic beats to soul-stirring orchestral compositions, ensuring there’s something to suit every creative vision.

One of the tracks featured in The Bride.

Uppbeat’s AI Playlist Generator is one of its coolest features. This innovative tool uses the power of artificial intelligence to simplify and enhance your search for music tracks based on scene style, text, influence, mood and a variety of other parameters. You start by describing your video or scene in your own words, and the AI does the rest. It curates tailored playlists of copyright-free tracks that match your creative vision.

Unlike simple keyword matching, Uppbeat’s AI understands complex descriptions, translating abstract concepts into suitable music and organizing playlists with the most suitable tracks first. No denying it’s fun to explore Uppbeat’s ability to understand your unique needs. And its music catalog contains a slew of talent that will make you wish some of these tracks were on Spotify to jam to.

The Pink Horn team has been curating a playlist of some of our favorite Uppbeat finds over on YouTube for your listening pleasure.

Uppbeat stands out for several reasons. Not least of which is its offering of streamlined licensing agreements, ensuring that you can use music worry-free in your projects.

User-Friendly Filmora 12 Interface

Step 7: Editing Your AI Short Film in Filmora 12

Once you have all the elements of your film, it’s time to start editing together your final project. There are a million and one ways to learn how to edit your video; I’m not going to deep-dive into that process in this article (YouTube is a treasure-trove of tutorials). You also have a variety of editing tools at your disposal depending on your poison – Adobe Premiere, DaVinci Resolve, etc. We at Pink Horn collectively enjoy Wondershare Filmora 12.

Filmora 12 is a powerful video editing software that can be used to edit all the components of your AI short film, from the footage to the audio to the effects. Here is some of the basic functionality of Filmora 12:

  • Import your footage: Filmora 12 supports a wide variety of video formats, so you can import your footage from any camera or device.
  • Trim and cut your footage: You can use Filmora 12 to trim and cut your footage to the desired length and remove unwanted parts.
  • Add transitions: Filmora 12 comes with a library of transitions that you can use to add smooth transitions between your clips.
  • Apply effects: Filmora 12 has a variety of effects that you can use to enhance your footage, such as filters, color correction, and motion blur.
  • Add text and titles: You can use Filmora 12 to add text and titles to your footage to create captions, credits, and other text elements.
  • Add audio: You can add audio to your footage, such as music, sound effects, and voice-overs.
  • Export your video: Once you are finished editing your short film, you can export it in a variety of formats, such as MP4, AVI, and MKV.

Like many platforms and programs as of late (and expect this to continue well into the future), Filmora 12 has added AI capabilities to some of its tools. Most notable is Smart Cutout, a feature that allows you to easily remove unwanted objects from your footage, such as people, cars, trees, and monsters. And then there’s AI Audio Stretch, a feature that allows you to stretch or shrink audio clips without affecting their pitch. This can be useful for matching the audio to the length of your footage or for creating slow-motion or fast-forward effects.

Freddy Kraft’s Filmora 12 work session showcasing the Smart Cutout tool.

Filmora 12 also comes with a library of royalty-free music that you can use in your AI short films, which provides an alternative option to Uppbeat (the music in the work session above is a Filmora track). This can save you the time and hassle of finding and licensing music from other sources.

In addition, Filmora 12 comes with a library of video templates that you can use to quickly and easily create videos, including pre-made layouts, transitions, and effects, so you can get started editing right away. Or, if you want to bolster your assets, you can log in to Filmora online and download both free and paid packages with even more effects, music, themes, etc.

Overall, Filmora 12 is a powerful and versatile video editing software that can be used to create professional-looking short films. It has a wide range of capabilities, including both basic and advanced editing tools, as well as a number of cool features that can help you create unique and visually appealing videos. If you are looking for a video editing software, Filmora 12 is a great option to consider. You can download it for free to test it out; if you like what you see, plans start at $49.99/yr with a $79.99 perpetual option.

Switch Up Your Style with Kaiber

Re-creating The Bride with Kaiber

Feeling experimental? I have another tool for you.

Kaiber is an AI-powered video generation platform that combines advanced technology with artistic ingenuity. It lets users transform their creative ideas into visually engaging videos, music visualizers, and interactive experiences in a fraction of the time and resources traditionally required. Developed by a team of AI researchers and artists, Kaiber Studio is at the forefront of the AI-video content creation landscape.

Similar to Runway, Kaiber offers both text-to-video and image-to-video generators. It’s presently famous for its metamorphosing AI zoom-style “flipbook” videos (you’ll know them if you see them.)

But Kaiber’s real superpower lies in its video-to-video capabilities:

Side-by-side of Runway and Kaiber using Kaiber’s video transform tool.

Kaiber’s Transform tool is a feature that allows users to unleash their creativity by applying a wide range of stylizations to their video clips. It’s a newer addition to the Kaiber toolkit (as of writing this) and offers an infinite array of possibilities for transforming videos. As you can see from the Bride video above, I took the scenes I generated in Runway Gen-2 and fed them through Kaiber. The result was the general scenes and motion of the original, only with a whole new style–which fully depends on my prompting and style selections inside Kaiber itself.

The Transform tool allows for imaginative transformations, from edgy cyberpunk aesthetics to mind-bending psychedelia and beyond, which can give a whole new edge to your vision. Experiment and explore, the possibilities are vast and open-ended, so get creative.

To use the Transform tool with video-to-video, start by selecting the Flipbook Video style in Kaiber’s lab and then upload the video you want to transform. Once your video is uploaded, you can prompt the desired subject and style for the transformation.

Pro-Tip: Be mindful of your prompt, especially where your subject is concerned, and try to use some specific descriptors. For the above Bride video, consistently prompting for “the bride of Frankenstein” impressively gave me the same character over and over for each scene. Changing that subject prompt up gave me something totally different. For instance above, I prompted for a “gothic skull bride,” and received a consistent character, though one completely unique from my bride of Frankenstein prompt, despite the video source being the same.

In Closing: Making a Short Film with AI

Before I leave you to your own creative devices, here are a few last tips for making a short film with AI:

  • Be creative and experiment. There are no rules when it comes to making films with AI. So be creative and experiment with different tools and techniques.
  • Don’t be afraid to ask for help. There are a number of online communities and forums where you can get help with making films with AI.
  • Have fun! Making a short film with AI can be a lot of fun. So relax, have fun, and let your creativity flow.

Creating a short film with AI is an innovative journey that showcases the potential of combining various AI tools. From AI-generated storyboards and transcripts to visuals, scene renders, dialogue, music, and video editing, each component plays a crucial role in shaping the final product. There are limitless possibilities for creativity on the horizon. As AI continues to advance, we can expect even more groundbreaking tools to emerge, and the key lies in learning how to use them.
