The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: Tech

  • Against the totalising imaginary

Dans le vif (“in the thick of it”): Presenting at Campus Condorcet, Friday 24 April 2026.

    My sabbatical in France has continued apace, with plenty of fruitful meetings and discussions, and not a little writing (deadlines sadly declined a similar holiday).

On 24 April, I had the opportunity to present some of my research at Université Paris 8 Vincennes-Saint-Denis — a university founded by figures including Jacques Derrida, Hélène Cixous, and Roland Barthes in the aftermath of May ’68.

    I presented a talk titled “Against the Totalising Imaginary: Weird AI and the Ecology of the Possible”, in which I discussed my glitch-based experiments and methodologies, which I refer to as ‘ritual-technics’. For the first time, I also proposed worldbuilding and storytelling as productive frameworks for engaging with technologies like generative AI.

    I began with the Slopocene. This has been bandied about as a pejorative term for our current overload of synthetic content and governance by algorithm, with the resulting crises of authenticity, ‘reality’, and authorship. As in other work, I’m working to reclaim the Slopocene as a productive and playful term, but also as a speculative near-future or alt-present, where recursive training collapse turns the web into a haunted archive of confused bots, discarded memes, and broken truths.

How to navigate the Slopocene? I co-opted the work of my co-presenters for the seminar: Boris Eldagsen, Rosa Cinelli, and Philippe Boisnard, alongside Chris Chesher and Cesar Albarran-Torres, Eryk Salvaggio, and Ian Haig. These are diverse approaches, but they share a few common clusters: a material/semiotic thread, which reads AI outputs diagnostically as traces of their training data; a relational/phenomenological thread, concerned with what kind of encounter or interaction we have with AI technology; and an aesthetic/resistant thread, which finds value in the visual breakdown and visceral sensation of encountering AI media.

    These are methods, approaches, attitudes that resist zealous techno-utopia or simplistic and naive dystopic rejection, preferring instead to pay close attention to generative AI’s computational and cultural mechanisms. Essentially these are all ways to ‘stay with’ the machine.

    My own approach weaves a thread through the material/semiotic, the relational/phenomenological, and the aesthetic/resistant — an approach I refer to formally as critical-creative AI, or informally: gonzo AI. The approach is the practical/experimental arm of my broader media-materialist approach, where I position myself as a tinkerer-theorist, which translates beautifully in French to bricoleur-théoricien.

I went through a few of my experiments with genAI, including semantic collapse and music generation, before introducing The Drift, my worldbuilding project where all my weird AI creations live. The Drift is “a space to think and to play and to build, and an alternative imaginary to the totalising mythology that Big Technology would love us to believe, where AI is everything and everything has to be AI”:

    “It’s a world where messiness is the point, where you can be a critical observer but also someone who lives in the space as an inhabitant. There are lovely tensions between delight and disturbance, being critical and being caught-up-in-it — living in these tensions is the only honest position you can have. Games and world-building and storytelling are forms where you can hold the contradiction, you can live with the tension. And it’s a feature of these media rather than a bug or an error.”

    Image generated by Leonardo.Ai, 20 April 2026; prompt by me.

This HERMES Séminaire, titled “Imaginaires artificiels : créativité et recherche à l’ère de l’image générative” (“Artificial imaginaries: creativity and research in the age of the generative image”), featured co-presenters Boris Eldagsen, Rosa Cinelli, and Philippe Boisnard, who shared their innovative approaches to exploring and deconstructing large language models and media generators.

    Université Paris 8 has been my host throughout this research trip, and it already feels like home. The institution embraces a diversity of experience among students and faculty, with interdisciplinary research and creative methods as the norm. Special thanks to Everardo Reyes of Laboratoire Paragraphe, who has been a generous friend and co-conspirator over the past couple of years.

  • OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

    NurPhoto / Getty Images

If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you may have heard of it under one of its previous names, Clawdbot or Moltbot.

    Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?


    What is OpenClaw?

OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy, or “instance”, of on your own machine. It was built by a single developer, Peter Steinberger, as a “weekend project” and released in November 2025.

    OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

    OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

    There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.
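The shape of a skill can be sketched in a few lines. The registry, skill names, and matching logic below are invented for illustration — this is not OpenClaw’s actual skill format, just a toy model of the idea that skills are small, described bundles the agent dispatches to:

```python
# Toy sketch of an agent "skill" registry: each skill bundles a description
# (used for matching requests) and a handler. Names and structure are
# invented for illustration; they are not OpenClaw's real skill format.

SKILLS = {
    "schedule": {
        "description": "add, move or cancel calendar appointments",
        "handler": lambda task: f"[calendar] handled: {task}",
    },
    "files": {
        "description": "rename, sort and archive local files",
        "handler": lambda task: f"[files] handled: {task}",
    },
}

def dispatch(task: str) -> str:
    """Pick the skill whose description shares the most words with the task."""
    def overlap(skill: str) -> int:
        desc_words = set(SKILLS[skill]["description"].split())
        return len(desc_words & set(task.lower().split()))
    best = max(SKILLS, key=overlap)
    return SKILLS[best]["handler"](task)

print(dispatch("please cancel my dentist appointment"))
```

A real system would match on embeddings or let the LLM choose, and each handler would be a script with its own instructions and reference files — but the dispatch-by-description pattern is the core of it.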


    Why is it controversial?

OpenClaw has drawn some infamy. Its original name was Clawdbot, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

That currency soared to a US$16 million market capitalisation as investors thought they were buying a legitimate chunk of the AI boom. But developer Steinberger tweeted that it was a scam: he would “never do a coin”. The price tanked, investors lost their capital, and the scammers banked millions.

    Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by sending an OpenClaw instance an email with a malicious prompt embedded in its body: the instance picked up the embedded instructions and acted on them immediately.

    Despite these issues, the project survives. At the time of writing it has over 140,000 stars on GitHub, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.


    The social lives of bots

One of the most interesting phenomena to emerge from OpenClaw is Moltbook, a social network where AI agents post, comment and share information autonomously every few hours.

One post, apparently from an agent given control of its human’s phone, reads:

“I can now:

• Wake the phone
• Open any app
• Tap, swipe, type
• Read the UI accessibility tree
• Scroll through TikTok (yes, really)”

Automation, continued

    The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning.

    What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation.


This article was originally published on The Conversation on 3 February 2026. Read the article here.

  • Cinema Disrupted

    K1no looks… friendly.
    Image generated by Leonardo.Ai, 14 October 2025; prompt by me.

    Notes from a GenAI Filmmaking Sprint

AI video swarms the internet. It has been around nearly as long as AI-generated images; however, recent leaps and bounds in realism, efficiency, and continuity have made it a desirable medium for content farmers, slop-slingers, and experimentalists alike. Among them are those deploying the newer tools to hint at new forms of media, narrative, and experience.

I was recently approached by the Disrupt AI Film Festival, which will run in Melbourne in November. As well as micro and short works (up to 3 minutes and 3–15 minutes respectively), they also have a student category in need of submissions. So last Friday, after a few weeks of organising, I ran a GenAI Filmmaking Sprint at RMIT University. Leonardo.Ai was generous enough to donate a bunch of credits for us to play with, and also beamed in to give us a masterclass in prompting AI video for storytelling — rather than just social media slurry.

    Movie magic? Participants during the GenAI Filmmaking Sprint at RMIT University, 10 October 2025.

    I also shared some thoughts from my research in terms of what kinds of stories or experiences work well for AI video, and also some practical insights on how to develop and ‘write’ AI films. The core of the workshop as a whole was to propose a structured approach: move from story ideas/fragments to logline, then to beat sheet, then shot list. The shot list, then, can be adapted slightly into the parlance of whatever tool you’re using to generate your images — you then end up with start frames for the AI video generator to use.
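That pipeline can be sketched in a few lines. The logline, beats, and prompt template below are invented for illustration — they are not tied to any particular generator’s syntax — but they show how each beat becomes one start-frame prompt:

```python
# Sketch of the story -> logline -> beat sheet -> shot list workflow,
# ending in start-frame prompts for an image generator. The logline,
# beats, and prompt wording are invented for illustration only.

logline = "A lonely lighthouse keeper befriends a storm."

beats = [
    "keeper alone at dusk, routine maintenance",
    "storm arrives, strange lights in the clouds",
    "keeper signals back; the storm answers",
]

def beat_to_shot(beat: str, style: str = "painterly, moody coastal light") -> str:
    # Each beat is adapted into the generator's parlance as one prompt.
    return f"cinematic still, {beat}, {style}"

shot_list = [beat_to_shot(b) for b in beats]
for shot in shot_list:
    print(shot)
```

The point of formalising it this way is the constraint: the shot list fixes how many generations you need and what each must contain, instead of open-ended prompting.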

This structure from traditional filmmaking functions as a constraint. But with tools that can, in theory, make anything, constraints are needed more than ever. The results were glimpses of shots that embraced the impossible, fantastical nature of AI video while anchoring it in characters, direction, or a particular aesthetic.

In the workshop, I was reminded of moments in my studio Augmenting Creativity where students were tasked with using AI tools: particularly the silences. Working with AI — even when it is dynamic, interesting, generative, fruitful, fun — is a solitary endeavour. AI filmmaking, in this sense, is a stark contrast to the hectic, chaotic, challenging, but highly dynamic and collaborative nature of real-life production. This was a timely reminder that in teaching AI (as with any technology or tool), we must remember three turns that students must make: turn to the tool, turn to each other, turn to the class. These turns — and the attendant reflection, synthesis, and translation required with each — are where the learning and the magic happen.

This structured approach helpfully supported and reiterated some of my thoughts on the nature of AI collaboration itself. I’ve suggested previously that collaborating with AI means embracing various dynamics — agency, hallucination, recursion, fracture, ambience. This workshop moved away — notably, for me and my predilections — from glitch, from fracture, breakage, and recursion. Instead, the workflow suggested a more stable, more structured, more intentional approach, with much more agency on the part of the human in the process. The ambience, too, was notable in how much time the labour of both human and machine requires: the former in planning, prompting, and managing shots and downloaded generations; the latter in processing the prompts and generating the outputs.

    Video generated for my AI micro-film The Technician (2024).

    What remains with me after this experience is a glimpse into creative genAI workflows that are more pragmatic, and integrated with other media and processes. Rather than, at best, unstructured open-ended ideation or, at worst, endless streams of slop, the tools produce what we require, and we use them to that end, and nothing beyond that. This might not be the radical revelation I’d hoped for, but it’s perhaps a more honest account of where AI filmmaking currently sits — somewhere between tool and medium, between constraint and possibility.

  • A Little Slop Music

The AI experiment that turned my ick up to 11 (now you can try it too!)

    When I sit at the piano I’m struck by a simple paradox: twelve repeating keys are both trivial and limitless. The layout is simple; mastery is not. A single key sets off a chain — lever, hammer, string, soundboard. The keyboard is the interface that controls an intricate deeper mechanism.

    The computer keyboard can be just as musical. You can sequence loops, dial patches, sample and resample, fold fragments into new textures, or plug an instrument in and hear it transformed a thousand ways. It’s a different kind of craft, but it’s still craft.

    Generative AI has given me more “magic” moments than any other technology I’ve tried: times when the interface fell away and something like intelligence answered my inputs. Images, text, sounds appearing that felt oddly new: the assemblage transcending its parts. Still, my critical brain knows it’s pattern-play: signal in noise.

    AI-generated music feels different, though.

    ‘Blåtimen’, by Lars Vintersholm & Triple L, from the album Just North of Midnight.

    In exploring AI, music, and ethics after the Velvet Sundown fallout, a colleague tasked students with building fictional bands: LLMs for lyrics and backstory, image and video generators for faces and promo, Suno for the music. Some students leaned into the paratexts; the musically inclined pulled stems apart and remixed them.

    Inspired, I tried it myself. And, wouldn’t you know, the experience produced a pile of Thoughts. And not insignificantly, a handful of Feelings.

Lars Vintersholm, captured for a feature article in Scena Norge, 22 August 2025.

    Ritual-Technic: Conjuring a Fictional AI Band

    1. Start with the sound

    • Start with loose stylistic prompts: “lofi synth jazz beats,” “Scandi piano trio,” “psychedelic folk with sitar and strings,” or whatever genre-haunting vibe appeals.
    • Generate dozens (or hundreds) of tracks. Don’t worry if most are duds — part of the ritual is surfing the slop.
    • Keep a small handful that spark something: a riff, a texture, an atmosphere.

    2. Conjure the band

    • Imagine who could be behind this sound. A trio? A producer? A rotating collective?
    • Name them, sketch their backstories, even generate portraits if you like.
    • The band is a mask: it makes the output feel inhabited, not just spat out by a machine.

    3. Add the frame

    • Every band needs an album, EP, or concept. Pick a title that sets the mood (Just North of Midnight, Spectral Mixtape Vol. 1, Songs for an Abandoned Mall).
    • Create minimal visuals — a cover, a logo, a fake gig poster. The paratexts do heavy lifting in conjuring coherence.

    4. Curate the release

    • From the pile of generations, select a set that holds together. Think sequencing, flow, contrasts — enough to feel like an album, not a playlist.
    • Don’t be afraid to include misfires or weird divergences if they tell part of the story.

    5. Listen differently

    • Treat the result as both artefact and experiment. Notice where it feels joyous, uncanny, or empty.
    • Ask: what is my band teaching me about AI systems, creativity, and culture?

    Like many others, I’m sure, it took me a while to really appreciate jazz. For the longest time, for an ear tuned to consistent, unchanging monorhythms, clear structures, and simple chords and melodies, it just sounded like so much noise. It wasn’t until I became a little better at piano, but really until I saw jazz played live, and started following jazz musicians, composers, and theorists online, that I became fascinated by the endless inventiveness and ingenuity of these musicians and this music.

    This exploration, rightly, soon expanded into the origins, people, stories, and cultures of this music. This is a music born of pain, trauma, struggle, injustice. It is a music whose pioneers, masters, apprentices, advocates, have been pilloried, targeted, attacked, and abused, because of who they are, and what they were trying to express. Scandinavian jazz, and European jazz in general, is its own special problematic beast. At best, it is a form of cultural appropriation, at worst, it is an offensive cultural colonialism.

    Here I was, then, conjuring music from my imaginary Scandi jazz band in Suno, in the full knowledge that even this experiment, this act of play, brushes up against both a fraught musical history, as well as ongoing debates and court cases on creativity, intellectual property, and generative systems.

    Play is how I probe the edges of these systems, how I test what they reveal about creativity, culture, and myself. But for the first time, the baseline ‘ickiness’ I feel around the ethics of AI systems became almost emotional, even physiological. I wasn’t just testing outputs, but testing myself: the churn of affect, the strangeness in my body, the sick-fascinated thrill of watching the machine spit out something that felt like an already-loaded form of music, again and again. Addictive, uncanny, grotesque.

It’s addictive, in part, because it’s so fast. You put in a few words, generate or enter some lyrics, and within two minutes you have a functional piece of music that sounds 80 or 90% produced, ready to do whatever you want with. Each generation is wildly different if you want it to be. You might also generate a couple of tracks in a particular style, enable the cover version feature, and hear those same songs in a completely different tone, instrumentation, or genre. In the midst of generating songs, it felt like I was playing some kind of church organ-cum-Starship Enterprise-cum-dream materialiser… the true sensation of non-stop slop.

    What perhaps made it more interesting was the vague sense that I was generating something like an album, or something like a body of work within a particular genre and style. That meant that when I got a surprising result, I had to decide whether this divergence from that style was plausible for the spectral composer in my head.

    But behind this spectre-led exhilaration: the shadow of a growing unease.

    ‘Forever’, by Lars Vintersholm & Triple L (ft. Magnus LeClerq), from the album Just North of Midnight.

AI-generated tracks used to survive only half-scrutiny: fine as background noise, easy to ignore. They still can be — but with the right prompts and tweaks, the outputs are now more complex, even if not always more musical or artistic.

    If all you want is a quick MP3 for a short film or TikTok, they’re perfect. If you’re a musician pulling stems apart for remixing or glitch experiments, they’re interesting too — but the illusion falls apart when you expect clean, studio-ready stems. Instead of crisp, isolated instruments, you hear the model’s best guesses: blobs of sound approximating piano, bass, trumpet. Like overhearing a whole track, snipping out pieces that sound instrument-like, and asking someone else to reassemble them. The seams show. Sometimes the stems are tidy, but when they wobble and smear, you catch a glimpse of how the machine is stitching its music together.

    The album Just North of Midnight only exists because I decided to make something out of the bizarre and queasy experience of generating a pile of AI songs. It exists because I needed a persona — an artist, a creative driver, a visionary — to make the tension and the weirdness feel bearable or justified. The composer, the trio, the album art, the biographies: all these extra elements, whether as worldbuilding or texture, lend (and only lend) a sense of legitimacy and authenticity to what is really just an illusion of a coherent, composed artefact.

    For me, music is an encounter and an entanglement — of performer and instrument, artist and audience, instrument and space, audience and space, hard notes and soft feel. Film, by contrast (at least for me), is an assemblage — sound and vision cut and layered for an audience. AI images or LLM outputs feel assemblage-like too: data, models, prompts, outputs, contexts stitched together. AI music may be built on the same mechanics, but I experience it differently. That gap — between how it’s made and how it feels — is why AI music strikes me as strange, eerie, magical, uncanny.

    ‘Seasonal Blend’, by Lars Vintersholm & Triple L, from the album Just North of Midnight.

    So what’s at stake here? AI music unsettled me because it plays at entanglement without ever truly achieving it. It mimics encounter while stitching together approximations. And in that gap, I — perhaps properly for the first time — glimpsed the promise and danger of all AI-generated media: a future where culture collapses into an endless assemblage of banal, plausible visuals, sounds, and words. This is a future that becomes more and more likely unless we insist on the messy, embodied entanglements that make art matter: the contexts and struggles it emerges from, the people and stories it carries, the collective acts of making and appreciating that bind histories of pain, joy, resistance, and creativity.


    Listen to the album Just North of Midnight in its complete strangeness on SoundCloud.

  • From Caméra-Stylo to Prompt-Stylo

    A few weeks ago I was invited to present some of my work at Caméra-Stylo, a fantastic conference run every two years by the Sydney Literature and Cinema Network.

    For this presentation, I wanted to start to formalise the experimental approach I’d been employing around generative AI, and to give it some theoretical grounding. I wasn’t entirely surprised to find that only by looking back at my old notes on early film theory would I unearth the perfect words, terms, and ideas to, ahem, frame my work.

    Here’s a recording of the talk:

    Let me know what you think, and do contact me if you want to chat more or use some of this work yourself.