Do you guys remember when I wrote that one piece on how to remix Midjourney prompts for that creative spark? That’s honestly how all my contributions start. Every now and then, our Almighty Editor makes a request. As a mere mortal, I must obey. SO. I’m gonna make a horror short with Pika Labs and show you guys exactly how I do it.
Pika Labs is an AI text-to-video platform. You simply input your written text, and Pika Labs does the rest: it transforms your words into videos. It’s been one of my go-tos for creating my horror AI shorts, like “I L O V E Y O U S O” below.
In all honesty, that was the horror short I was going to write my blog about, but it turned into 10 pages of fear, confusion, and anger. Like taking a shower in a cheap hotel room… okay, yeah…
Let’s begin.
The Idea
Full transparency, I didn’t get too creative with “the idea” in question. Mrs. Editor was enjoying the heck out of her two-sentence horror stories, and the hype is contagious, so I set off to find some 2-sentence inspo of my own with my best friend in the entire world: Google. (Did you know Google is using AI to explain search results these days? Did you see that? It pops up at the top of the page when you search. Google learned the lessons Blockbuster didn’t: get with the trend. But I digress. I do this.)
With AI’s current capabilities, I find that cartoon-style renderings work best, both for my personal taste and for my results. The AI video tools accessible to the masses right now are still in their early development stages; they’re evolving and their graphics are improving with each new update, but they still have rendering challenges… like human faces. Did I say that?
My two BFFs of AI video, Pika Labs and Runway Gen-2, both seem to like the bolder lines of cartoon art better, and both, for the most part, keep their toes out of the Uncanny Valley with this style.
So, in plundering Google for horror short film ideas, I came across a few horror stories I liked, but they didn’t prompt well into Midjourney, at least not in a way that I wanted. AI has come a long way, but it isn’t perfect. You must be flexible and willing to try new ideas, new vocabulary, and new ways of thinking. Prompting isn’t magic; unless you have zero expectations, it takes some strategy…
FOCUS PIVOT: The Challenges of Prompting
Will the real Hook Man please stand up?
I have to tangent for a sec on this prompting subject. It can be harder than it sounds. For instance, the Pink Horn team has been collaborating on a blog about Urban Legends for Spooky Season, and at least four of us have been trying at length to prompt one specific urban legend for the better part of a week.
NOTHING IS WORKING.
You would think that the Hook-Hand Man legend would be common knowledge considering its enduring longevity and undisputed place of honor in Alvin Schwartz’s consummate masterpiece Scary Stories to Tell in the Dark (not to mention this gem: man door hand hook car door), right? RIGHT?
NOT TO MIDJOURNEY, IT ISN’T. NO SIR. NEVER HEARD OF HIM.
The absolute mess of photos we have generated has filled our Pink Horn group chat with laughter and anger: MJ’s soup du jour for anything involving the Hookman urban legend. We’ve tried every version of his name, we’ve tried pirates, we’ve tried ‘Captain Hook’, we’ve tried inpainting arms, remixing pre-existing generations, referencing photos, and prompting Lover’s Lane and Texarkana.
Big. Ol’. Goose egg.
As brain-bashing as this has been, it has taught me how to look outside the photo I want to find the wording that Midjourney (or, in the case above, Leonardo.AI) needs for its prompt. I want Hookman and MJ says “Who?” So who else has a hook for a hand? Can I prompt a more well-known character? Can I prompt the aesthetic and inpaint the hook into the photo? What else is he known for? What buzzwords would make Midjourney say, “OH! You mean THAT Hookman?”
This has not happened for us yet with Hookman.
I don’t think it will.
This is our most desperate hour. Help us, Obi-Wan Kenobi, you’re our only hope.
END PIVOT: Back to the Blog I’m Supposed to Be Writing
So in that vein, after unsuccessfully toiling through different two-sentence horror stories to develop the base idea for my film, I found one that I liked quite a bit. Thank you, CheeseeKimbap.
How you storyboard your horror short in order to figure out your scenes is up to you. (There’s a much more in-depth look at how to do this with AI over here). I find that picking a title gets me hyped up for putting the video together and can lead to more of those lil’ brain babies along the way. Therefore, I need a title for this short.
Pro Tip: thesaurus.com is your best friend.
I love it. Now, let’s make it.
The Production: CHAOS
My next step in storyboarding my shorts: Chaos. There is no next step. I just start making scenes. Of course, if you want to be more organized, you can use something like ChatGPT to help you organize your scenes into a storyboard, and then start generating your scenes from that reference. My scenes are in my head most of the time, so I yolo it into prompts and just start rolling.
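If you do go the ChatGPT route, a prompt along these lines gets the job done (hypothetical wording, obviously; tweak it to fit your story):

```
Here's my two-sentence horror story: [paste story here]. Break it into six
scenes for a roughly 60-second short. For each scene, give me a one-line
visual description I can use as an image prompt.
```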
Midjourney: Seeding & Rendering Images
For imagery, I use Midjourney. The whole Pink Horn team has a love affair with Midjourney (it’s hard not to), but there are more art generator options out there if you’re not feeling the MJ vibes. And with text-to-video or image-to-video tools like Pika Labs and Runway Gen-2, if you have non-AI-generated art or photos you want to turn into movies, go to town. This isn’t limited to AI content; these tools are here to help you bring your creative visions to life.
Picture 1 is perfect. I’m going to upscale it (that’s the “U1” button), and while I’m here, I’m gonna grab the seed real quick. This will help keep the art style similar throughout the entire short. If you’re unfamiliar with how to use the seed, we’ve got an article that tells you all about it.
Finding the seed in Midjourney involves reacting to your generation with the envelope emoji (✉️); the Midjourney bot then DMs you the seed number for that job.
A seed helps tame the randomness in Midjourney generations and gives you more consistent results. As you can see in the third image, U4 got really close to my original couple’s style. That’s the seed at work: include it in your prompt and your results will have more cohesion.
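For reference, the seed tacks onto the end of your Midjourney prompt with the --seed parameter. Here’s the shape of it (the prompt text and seed number are made up for illustration; use your own):

```
/imagine prompt: cartoon style, married couple having dinner at home, bold lines, eerie lighting --seed 1234567890
```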
Now that you know that, continue rendering your scenes this way. Once you’ve upscaled, make sure to open each image fully in your browser for the largest resolution, then save. You can be organized and name and number each scene file.
I… am not organized.
Once you have your scenes, let’s load them into Pika Labs.
Pika Labs: Rendering Video
Pika Labs is a FREE text-to-video and image-to-video AI tool that runs on Discord, same as Midjourney. It offers two primary ways to kickstart the video generation process. If you want a one-on-one interaction where you see only your own generations and can keep track of all your work, you can message the Pika bot directly. OR you can suffer the chaos of Discord’s #generate threads, where you see everyone else’s generations and maybe yours, if it doesn’t fly by too fast and get lost in the melee. Don’t be fooled, this is not a collaborative situation; this is madness run amok. Though maybe not as maddening as Midjourney’s version of this Discord insanity. (Pika does highlight your result, I’ll give them that.)
Take my word on this: USE THE BOT.
So inside the Pika bot, you upload your image by typing the /create or the /animate command, then hitting that “+1 more” option to bring up the image uploader. Locate, or drag and drop, your scene image and add a text prompt to tell Pika Labs how you want to animate it.
I’m going to prompt for minimal movement for now. I want the first scene to convey normality for the characters.
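Something along these lines, with the scene image attached (this is the shape of the command, not my exact wording):

```
/create prompt: a cartoon couple sits at the dinner table, minimal movement, candlelight flickers gently
```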
One of the newer features that Pika Labs has introduced (as of writing this) is the /animate command. This command lets you incorporate images into your video prompts. Unlike traditional methods, you no longer need to type out a prompt; you can simply attach an image and let Pika do the rest. I prefer to go this route first, just see what Pika does with the scene. Then I adjust it to fit my story.
However, on this occasion, when I ran the photo through with no parameters, it gave me some strange leg movements and a windblown tablecloth.
Time to get picky and add a text prompt into the mix.
When prompting Pika, you can’t get crazy. If you’re uploading an image of a couple, you can’t suddenly tell Pika it’s a rocket ship. Say it’s a couple. Describe what’s in the image and how you want it to move.
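To make that concrete (hypothetical prompts, not the ones I used): `/create prompt: a rocket ship launches into space` over a dinner-table image gets you nonsense, while `/create prompt: a couple sits at a dinner table, the wife turns her head, steam rises from the plates` stays grounded in what’s actually in the frame.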
After a few more tweaks, peeks, pokes, and prods, this was the prompt I ended up using:
Pika Labs: Parameters
Let’s get a bit granular for a second.
Pika Labs recently rolled out their camera control feature, denoted by the -camera parameter, in response to Runway’s (also recent) rollout of their own camera control tools. Camera control in these video generators is the next step toward having more command as the director of your AI-created videos. And understanding the optional parameters is essential to capitalizing on the full potential of your creative brain-babies. These parameters give you fine-grained control over your video prompts, so you can make sure your husband character stops doing backward leg-kicks that make him look like a failed Irish line dancer.
Lord of the Dance he is not.
But let’s review these Pika parameters, shall we?
- Camera Parameter: `-camera`. Directs the camera movement in your clip and is a vital tool for controlling the perspective of your video. It lets you specify how the camera behaves within your scene (see the combined example after this list).
- Zoom: You can instruct the camera to zoom in or out, bringing your audience closer to the subject or creating a wider view. Example: `-camera zoom in`
- Pan: Panning moves the camera smoothly in a specific direction, such as up, down, left, or right. You can also combine directions for more complex movements. Example: `/create prompt: this is an example of a prompt -camera pan right`
- Rotate: Rotate turns the camera clockwise or counterclockwise, allowing for dynamic scene rotations. Example: `-camera rotate clockwise`
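And since pan directions can be combined, here’s a hypothetical diagonal move (scene text invented for illustration):

```
/create prompt: cartoon living room at night, curtains sway -camera pan up right
```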
In addition to the Camera Control Feature, Pika Labs has also rolled out other updates to enhance ye olde video generating experience. Notably, they have set the default frame rate (-fps) to 24 frames per second, aligning with industry standards.
Why does this matter, you ask? The default frame rate of 24 frames per second helps your AI-generated videos maintain a professional, cinematic quality. So here are the rest of the parameters (with a combined example after the list):
- Frames Per Second Parameter: `-fps`. Determines the number of frames displayed per second in your video; a higher FPS creates smoother, more fluid animation. You can set it anywhere from 8 to 24, with 24 being the default. Example: `-fps 16`
- Motion Parameter: `-motion`. Controls the intensity of motion in your video, influencing how animated elements within your scene move. You can specify a value from 0 (minimal motion) to 4 (intense motion), with 1 being the default. Example: `-motion 3`
- Guidance Scale Parameter: `-gs`. Influences how closely the generated content adheres to your text input. Pika advises values between 8 and 24, with 12 being the default. Example: `-gs 16`
- Negative Prompt Parameter: `-neg` followed by specific words. Supposed to exclude certain elements or concepts from your generated video, which helps you avoid unwanted content. Include words you want kept out of your video, like `-neg leg motion`
- Aspect Ratio Parameter: `-ar`. Sets the width-to-height ratio of your video, defining its overall shape and dimensions. You can specify ratios like 16:9, 9:16, 1:1, or 4:5, with 1024:576 being the default. Example: `-ar 16:9`
- Seed Parameter: `-seed`. Back to consistency again, this time in video generation. By providing a seed number, you can replicate the same results as long as the prompt and negative prompt remain unchanged. Handily, the seed number of a generated video sits at the end of its file name, making it easy to track and reproduce specific results. Example: `-seed 123456789`
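To see how these stack in one command, here’s a hypothetical prompt pulling several parameters together (the scene text and values are made up; swap in your own):

```
/create prompt: cartoon husband walks through the front door, wife greets him -camera zoom in -fps 24 -motion 1 -gs 12 -neg leg motion -ar 16:9 -seed 123456789
```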
Making Your Clips Longer
Creating longer, consistent videos in Pika requires a strategic approach: stitching together multiple clips so that your video maintains a consistent flow.
To begin, you’ll need the final frame of each clip. Here’s how to grab it:
- Access the website finalframe.net.
- Click on “Add Video” and upload your video clip.
- Select the “Final Frame” option and click “Download this frame.”
With the final frames in hand, you can proceed to create a longer video in Pika:
- Open Pika and type `/animate`.
- Paste the previous prompt (you can make minor adjustments if necessary). Example: `/animate prompt: wife greets husband, husband kisses wife, wife's hair blows gently`
- Add the final frame image you downloaded.
- Submit the prompt.
Rinse and repeat until your scene is the length you want.
After collecting and adding the clips, it’s essential to edit them together for a seamless viewing experience.
- Overlap Technique: To mask any abrupt transitions between clips, apply a slight overlap when editing. This technique ensures a smooth and consistent flow from one clip to the next.
I’m not sure I actually used that technique here, but the lesson is “do what I say, not what I do.” By following the steps above, you can make your videos longer and maintain a consistent visual narrative in Pika.
Adding Music & Editing
Let’s keep it real. Right now, as I’m writing, I’m losing steam. I don’t want to brag or anything, but I’m kind of known for never finishing what I start. Y’know how I said the title is something that I can ride the hype train on for the first bit? I need more hype. I need music.
What kind of music fits the style of your short? For W R A I T H, I want distorted, Paul Anka-esque vibes. So I’m heading over to Uppbeat, a stockpile of awesome music with an AI playlist tool that can build you a list of tracks from just a description of your scene. Talk about time-saving. Type in my search, and voila!
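For the curious, the kind of description the playlist tool wants is something like “distorted vintage crooner, eerie, slow build into horror” (I’m paraphrasing the idea here, not quoting my exact search).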
All aboard the hype train; let’s get back to work.
Actually, brain baby: I wonder if I can follow the horror dips in the music to match the clips. Well, reader, get ready to find out with me if we can pull this off.
First, I’m going to load the music into Filmora (editing software used by most of Pink Horn) and have another listen. Then, I’m gonna cut out a good chunk of the beginning. This is a shorter clip, so I want the crescendo into horror without all the build up. After that, I’m going to see if I can split the clips to guide myself with future clip numbers needed and overall length.
Basically, in Filmora, or your editing software of choice, be it Adobe Premiere or DaVinci Resolve or any of the others (you can try Clipchamp for free), you’re going to stitch all of your components together to make your finished horror short. In a program like Filmora, this is a pretty intuitive process, and the learning curve is not a monster. Filmora 12 has also been adding a lot of cool new AI features, like its Smart Cutout tool, which really amplifies its abilities at this stage. It’s still going to be a process, requiring human trial and error.
AI is an amazing tool, but it is just that. A tool. If you can find a point of harmony between creator and tool, you’ll create masterpieces.
My Pika scene is 3 seconds, so I’m going to trim up the beginning of the song. I feel the clip will run smoother with 2 seconds of music shaved off instead of tacking on another 2-second clip; it’ll get choppy. I’ll use a heavier fade-in for the music in Filmora so it doesn’t sound so terribly abrupt. This is exactly why I don’t plan out videos. I get ideas, and AI guides me through to create something from the mere idea it was before. Chaos, darlings!
If you storyboard in your brain, you don’t have to erase anything.
Chaos, my doveys, every aspect of your life needs chaos, including Midjourney and storyboarding and music and games and your finances and your job and your car and your khakis and your car’s extended warranty AND STAY TUNED FOR MUCH MUCH MORE FROM THE CREATORS OF SHAMWOW. Ahem.
I’ve changed a few of the audio clips and generated the conversation between the officer and the wife through ElevenLabs (more on that in a different article). If you have trouble getting your character to match the tone you’re looking for, I recommend lowering the stability of the voice. This makes regenerations of your character speak in different tones and can make the voice more expressive, rather than having ElevenLabs keep the voice strictly consistent. The downside is that it can lead to some instability in your chosen voice, but I’ve found in many of my projects that the benefit outweighs the risk.
Now I have the audio lined up with the music breaks, and I’ve put in my finished clips as well as a few sound effects. All that’s left is prompting the clips for the rest of the story.
Once we have that, iiiiiiiiit’s Showtime!
And that’s how I do it with Pika Labs. I know there are a million and three other ways to make videos with AI (we have a few more articles around here on how to do it), and I’d bet you a couple mill that there’s definitely an easier way than my method. You just have to use the wonderful tools AI gives us and run with your own flavor of chaos.
Until then, stay thirsty, my friends. See you in the next blog.