How It Feels to Be a Cincinnatian
Generative AI Short Film | Created for Cincy AI for Humans (2025)
Overview
This short was created the morning of my presentation at Cincy AI Week as a live demo of what’s possible using the open-source launcher Pinokio AI. My goal: make something from scratch in 2–3 hours, lean into humor, and reflect the everyday vibe of being a Cincinnati resident — all through the lens of generative video tools.
The result is a satirical, slightly absurd AI-generated film that had the audience laughing mid-presentation. It was also a personal experiment in dropping perfectionism: I finished the final edit 15 minutes before heading to the event.
Process + Tools
• Script: Written manually (morning of the presentation)
• Video Generation: Entirely created in WAN 2.1 via Pinokio
• Voiceover: AI TTS narration based on my script
• Face Refinement: Used Roop to deepfake my face onto my own face — correcting WAN’s render glitches and preserving consistency
• Editing: Finalized in Adobe Premiere Pro
Audience Response
One of the biggest reactions came from the line: “I like to come home and melt into my couch” — timed with a warped deflate effect on my AI-generated body. The imperfections actually added charm. The video resonated because it was relatable, local, and clearly made under pressure — which helped people connect with the creative process behind it.
Tools Used
WAN 2.1, Roop (via Pinokio AI), AI voice generation, Adobe Premiere Pro
Generative AI Video: "Rachel and Leah" - Production Workflow
Project Overview:
This project demonstrates a comprehensive application of generative AI tools to produce a narrative-driven video adaptation of the biblical story of Rachel and Leah. Every stage of the creative process, from script development to final rendering, leveraged AI capabilities.
Production Workflow:
1. Script Development and Scene Breakdown:
• Utilized ChatGPT to create a contemporary adaptation of the Rachel and Leah narrative.
• Instructed ChatGPT to delineate the script into distinct scenes, providing detailed visual prompts and corresponding narration for each.
2. Image Generation:
• Employed DALL-E 3 to generate high-fidelity images for each scene, based on the visual prompts provided by ChatGPT.
3. Video Synthesis:
• Utilized Minimax to transform the generated images into a cohesive video sequence.
4. Facial Consistency and Deepfake Refinement:
• Addressed potential inconsistencies in character facial features introduced during the video synthesis process.
• Implemented FaceFusion AI to apply deepfake techniques, ensuring consistent and accurate facial representations throughout the video.
5. Voiceover Narration:
• Leveraged ElevenLabs to generate realistic and nuanced voiceover narration based on the script provided by ChatGPT.
6. Ambient Audio Design:
• Integrated ambient sound effects (e.g., rain, applause) using MMAudio AI to enhance the atmospheric quality of specific scenes.
7. Subtitle Generation:
• Employed Whisper AI to generate precisely timed subtitles from the finalized voiceover narration, ensuring accessibility and clarity.
8. Final Editing and Rendering:
• Integrated all generated assets (video, audio, subtitles) within a video editing environment.
• Overlaid the generated subtitles onto the video timeline for final rendering, producing a polished and comprehensive video product.
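The subtitle step above can be sketched in a few lines of Python. This is a minimal illustration, not the exact script used in production: it assumes Whisper's Python API, whose `transcribe()` result includes a list of segment dicts with `start`, `end`, and `text` keys, and converts those segments into standard SRT text. The Whisper call itself is shown commented out (it requires the `openai-whisper` package and a model download); the `demo` segments are made-up sample data.

```python
def fmt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments) -> str:
    """Build SRT text from segments carrying 'start', 'end', 'text' keys."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{fmt_timestamp(seg['start'])} --> {fmt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Actual transcription step (assumes the openai-whisper package):
# import whisper
# result = whisper.load_model("base").transcribe("narration.mp3")
# srt_text = segments_to_srt(result["segments"])

# Made-up sample segments in Whisper's output shape, for illustration:
demo = [
    {"start": 0.0, "end": 2.5, "text": " Rachel and Leah were sisters."},
    {"start": 2.5, "end": 5.0, "text": " Their story begins in Haran."},
]
print(segments_to_srt(demo))
```

The resulting SRT text can be saved and imported directly into the editing timeline, which is where the subtitle overlay in the final step happens.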