The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: media production

  • Cinema Disrupted

    K1no looks… friendly.
    Image generated by Leonardo.Ai, 14 October 2025; prompt by me.

    Notes from a GenAI Filmmaking Sprint

    AI video swarms the internet. It has been around for nearly as long as AI-generated images, but its recent leaps and bounds in realism, efficiency, and continuity have made it a desirable medium for content farmers and slop-slingers. That said, there are also those deploying the newer tools to hint at new forms of media, narrative, and experience.

    I was recently approached by the Disrupt AI Film Festival, which runs in Melbourne in November. As well as micro and short works (up to 3 mins and 3–15 mins respectively), they also have a student category in need of submissions. So, over the last few weeks, I organised a GenAI Filmmaking Sprint at RMIT University, which ran last Friday. Leonardo.Ai was generous enough to donate a bunch of credits for us to play with, and also beamed in to give us a masterclass on prompting AI video for storytelling — rather than just social media slurry.

    Movie magic? Participants during the GenAI Filmmaking Sprint at RMIT University, 10 October 2025.

    I also shared some thoughts from my research on what kinds of stories or experiences work well as AI video, along with some practical insights on how to develop and ‘write’ AI films. The core of the workshop was a structured approach: move from story ideas or fragments to a logline, then to a beat sheet, then to a shot list. The shot list can then be adapted into the parlance of whatever tool you’re using to generate your images, leaving you with start frames for the AI video generator to work from.
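    As a purely illustrative sketch (in Python; the field names and the `beat_to_shot` helper are my own invention, not any tool's API), the pipeline is just a series of small transformations from beats to start-frame prompts:

    ```python
    # Purely illustrative: the workshop's pipeline as plain data transformations.
    # The "logline"/"beats" fields and beat_to_shot are my shorthand, not a real API.
    project = {
        "logline": "A lonely lighthouse keeper befriends the fog.",
        "beats": [
            "The keeper lights the lamp at dusk",
            "Fog curls in and swallows the beam",
            "The keeper dims the lamp to let the fog settle",
        ],
    }

    def beat_to_shot(beat: str, style: str = "wide shot, 35mm, cold blue palette") -> str:
        """Adapt one beat into an image-generator prompt: one start frame per beat."""
        return f"{style}, {beat.lower()}"

    shots = [beat_to_shot(b) for b in project["beats"]]
    ```

    Each resulting prompt becomes a candidate start frame, which is where the video generator takes over.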

    This structure from traditional filmmaking functions as a constraint. But with tools that can, in theory, make anything, constraints are needed more than ever. The results were glimpses of shots that embraced the impossible, fantastical nature of AI video while anchoring it with characters, direction, or a particular aesthetic.

    In the workshop, I was reminded of moments in my studio Augmenting Creativity where students were tasked with using AI tools: particularly the silences. Working with AI — even when it is dynamic, interesting, generative, fruitful, fun — is a solitary endeavour. AI filmmaking, too, in this sense, is a stark contrast to the hectic, chaotic, challenging, but highly collaborative nature of real-life production. This was a timely reminder that in teaching AI (as with any technology or tool), we must remember three turns that students must make: turn to the tool, turn to each other, turn to the class. These turns — and the attendant reflection, synthesis, and translation each requires — are where the learning and the magic happen.

    This structured approach helpfully supported and reiterated some of my thoughts on the nature of AI collaboration itself. I’ve suggested previously that collaborating with AI means embracing various dynamics — agency, hallucination, recursion, fracture, ambience. This workshop moved away — notably, given my predilections — from glitch, fracture, and recursion. Instead, the workflow suggested a more stable, more structured, more intentional approach, with much more agency on the part of the human in the process. The ambience was notable, too, in how much time the labour of both human and machine requires: the former in planning, prompting, and managing shots and downloaded generations; the latter in processing the prompts and generating the outputs.

    Video generated for my AI micro-film The Technician (2024).

    What remains with me after this experience is a glimpse into creative genAI workflows that are more pragmatic, and integrated with other media and processes. Rather than, at best, unstructured open-ended ideation or, at worst, endless streams of slop, the tools produce what we require, and we use them to that end, and nothing beyond that. This might not be the radical revelation I’d hoped for, but it’s perhaps a more honest account of where AI filmmaking currently sits — somewhere between tool and medium, between constraint and possibility.

  • Generatainment 101

    generated using Leonardo.Ai

    In putting together a few bits and bobs for academic work on generative AI and creativity, I’m poking around in all sorts of strange places, where all manner of undead monsters lurk.

    The notion of AI-generated entertainment is not a new one, but the first recent start-up I found in the space was Hypercinema. The copy on the website is typically vague, but I think the company is attempting to build apps for venues like stores, museums, and theme parks that insert visitors into virtual experiences or branded narratives.

    After noodling about on Hypercinema’s LinkedIn and X pages, it wasn’t long before I found Fable Studios and their Showrunner project; from there it was but a hop, skip and a jump to Showrunner’s parent concept, The Simulation.

    Sim Francisco; what I’m assuming is an artist’s rendition. Sourced from The Simulation on X.

    The Simulation is a project being developed by Fable Studios, a group of techies and storytellers interested in a seamless blend of their respective crafts. To quote their recent announcement: “We believe the future is a mix of game & movie. Simulations powering 1000s of Truman Shows populated by interactive AI characters.” I realise this is still all guff. From what I can tell, The Simulation is a sandbox virtual world populated by a huge variety of AI characters. The idea is that you can guide the AI characters, influencing their lives and decisions; you can then also zoom into a particular character or setting, then ask The Simulation to generate an ‘entertainment’ for you of a particular length, e.g. a 20-minute episode.

    In 2023, Fable Studios released a research paper on their initial work on ‘showrunner agents in multi-agent simulations’. To date, one of the largest issues with AI-generated narratives is that character and plot logics nearly always fall apart; the machine learning systems cannot keep track of prolonged story arcs. In conventional TV/film production, this sort of thing is the role of the director, often in conjunction with the continuity team and first assistant director. But genAI systems are by and large predictive content machines; they’ll examine the context of a given moment, build the next moment from there, then repeat, and repeat again. This process isn’t driven by ‘continuity’ in a traditional cinematic or even narrative sense, but by the cold logic of computation:

    “[A] computer running a program, if left powered up, can sit in a loop and run forever, never losing energy or enthusiasm. It’s a metamechanical machine that never experiences surface friction and is never subject to the forces of gravity like a real mechanical machine – so it runs in complete perfection.”

    John Maeda, How to Speak Machine, p3

    The ML system repeats the same process over and over, but note that it does not reframe its entire context from moment to moment in the way a human might: it simply starts again with the next moment, then starts again. This is why generating video with ML tools remains so difficult (at least at the time of writing).
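    That ‘build the next moment, then repeat’ loop can be caricatured in a few lines of Python. A toy sketch, not how any production model actually works: the invented `transitions` table stands in for learned probabilities, and each step conditions only on the current state, never the whole story so far.

    ```python
    import random

    def generate(transitions, start, steps, seed=0):
        """Toy next-moment loop: each step looks only at the current state,
        never the full history -- a crude stand-in for the limited context
        that makes long-form continuity so hard."""
        rng = random.Random(seed)
        state, story = start, [start]
        for _ in range(steps):
            state = rng.choice(transitions[state])  # build the next moment, then repeat
            story.append(state)
        return story

    # A tiny invented 'story world': the beats the system can move between.
    transitions = {
        "hero at home": ["hero leaves", "hero sleeps"],
        "hero leaves": ["hero meets rival", "hero at home"],
        "hero sleeps": ["hero at home"],
        "hero meets rival": ["hero leaves"],
    }
    story = generate(transitions, "hero at home", steps=6)
    ```

    Every transition is locally plausible, but nothing in the loop stops the ‘hero’ from wandering in circles or forgetting an arc that began ten beats ago — which is roughly the continuity problem described above.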

    What if, though, you make a video game, with a set of characters with their own motivations and relationships, and you just let life continue, let characters grow, as per a set of rules? Many sandbox or simulation games can be described in this way. There are also some open-world role-playing games that play out against what feels like a simulated, continuous world that exists with or without the player character. The player character, in this latter example, becomes the focaliser, the lens through which action is framed, or from which the narrative emerges. And in the case of simulators or city-builders, it’s the planning out of your little world, the embedding of your gameplay choices into the lives of virtual people (as either biography or extended history), that constitutes the experience. What The Simulation proposes is similar to both these experiences, but at scale.

    A selection of apparently-upcoming offerings from Showrunner. I believe these are meant to have been generated in/by The Simulation? Sourced from The Simulation on X.

    Sim Francisco is the first megacity that The Simulation has built, and they’re presently working on Neo-Tokyo. These virtual cities are the storyworlds within which you can, supposedly, find your stories. AI creators can jump into these cities, find characters to influence, and then prompt another AI system to capture the ensuing narrative. Again, this is all wild speculation, and the specific mechanics, beyond a couple of vague in-experience clips, are a mystery.

    As is my wont, I’m ever reminded of precedents, not least of which were the types of games discussed above: SimCity, The Sims, The Movies, even back to the old classic Microsoft 3D Movie Maker, but also Skyrim, Grand Theft Auto, Cyberpunk 2077. All of these offer some kind of open-world sandbox element that allows the player to craft their own experience. Elements of these examples seem like they might almost be directly ported to The Simulation: influencing AI characters as in The Sims, or directing them specifically as in 3D Movie Maker? Maybe it’ll be a little less direct, where you simply arrange certain elements and watch the result, like in The Movies. But rather than just the resulting ‘entertainments’, will The Simulation allow users to embody player characters? That way they might then be able to interact with AI characters in single-player, or both AIs and other users in a kind of MMO experience (Fable considers The Simulation to be a kind of Westworld). If this kind of gameplay is combined with graphics like those we’re seeing out of the latest Unreal Engine, this could be Something Else.

    But then, isn’t this just another CyberTown? Another Second Life? Surely the same problems that plagued (and sometimes continue to plague) those projects will recur here. And didn’t we just leave some of this nonsense behind us with web3? Even in the last few months, desperate experiments around extended realities have fallen flat; wholesale virtual worlds might not be the goût du moment, er, maintenant. But then, if the generative entertainment feature works well, and the audience becomes invested in their favourite little sim-characters, maybe it’ll kick off.

    It’s hard to know anything for sure without actually seeing the mechanics of it all. That said, the alpha of Showrunner is presently taking applications, so maybe a glimpse under the hood is more possible than it seems.

    If this snippet from a Claude-generated sitcom script is anything to go by, however, even knowing how it works never guarantees quality.

    Claude Burrows? I think not. Screenshot from Claude.Ai.

    Post-script: How the above was made

    With a nod to looking under the hood, and also documenting my genAI adventures as part of the initial research I mentioned, here’s how I reached the above script snippet from the never-to-be-produced Two Girls, A Guy, and a WeWork.

    Initial prompt to Claude:

    I have an idea for a sitcom starring three characters: two girls and a guy. One girl works a high-flying corporate job, the other girl has gone back to school to re-train for a new career after being fired. The guy runs a co-working space where the two girls often meet up: most of the sitcom's scenes take place here. What might some possible conflicts be for these characters? How might I develop these into episode plotlines?

    Of the resulting extended output, I selected this option to develop further:

    Conflict 6: An investor wants to partner with the guy and turn his co-working space into a chain, forcing him to choose between profits and the community vibe his friends love. The girls remind him what really matters.

    I liked the idea of a WeWork-esque storyline, and seeing how that might play out in this format and setting. I asked Claude for a plot outline for an episode, which was fine? I guess? Then I asked it to generate a draft script for the scene between the workspace owner (one of our main characters) and the potential investor.

    To be fair to the machine, the quality isn’t awful, particularly by sitcom standards. And once I started thinking about sitcom regulars who might play certain characters, the dialogue seemed to make a little more sense, even if said actors would be near-impossible at best, and necromantic at worst.