The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: AI

  • Against the totalising imaginary

    Dans le vif: Presenting at Campus Condorcet, Friday 24 April 2026.

    My sabbatical in France has continued apace, with plenty of fruitful meetings and discussions, and not a little writing (deadlines sadly declined a similar holiday).

    On 24 April, I had the opportunity to present some of my research at Université Paris 8-Vincennes-Saint Denis — a university founded by figures including Jacques Derrida, Hélène Cixous, and Roland Barthes in the aftermath of May ’68.

    I presented a talk titled “Against the Totalising Imaginary: Weird AI and the Ecology of the Possible”, in which I discussed my glitch-based experiments and methodologies, which I refer to as ‘ritual-technics’. For the first time, I also proposed worldbuilding and storytelling as productive frameworks for engaging with technologies like generative AI.

    I began with the Slopocene. This has been bandied about as a pejorative term for our current overload of synthetic content and governance by algorithm, with the resulting crises of authenticity, ‘reality’, and authorship. As in other work, I’m working to reclaim the Slopocene as a productive and playful term, but also as a speculative near-future or alt-present, where recursive training collapse turns the web into a haunted archive of confused bots, discarded memes, and broken truths.

    How to navigate the Slopocene? I co-opted the work of my co-presenters for the seminar: Boris Eldagsen, Rosa Cinelli, and Philippe Boisnard, alongside Chris Chesher and Cesar Albarran-Torres, Eryk Salvaggio, and Ian Haig. These are diverse approaches, but they have a few common clusters: material/semiotic, i.e. we can read AI outputs diagnostically as results of training data; relationality/phemomenology, in terms of what kind of encounter or interaction we have with AI technology; and then an aesthetic/resistant thread, which finds value in the visual breakdown and visceral sensation of encountering AI media.

    These are methods, approaches, attitudes that resist zealous techno-utopia or simplistic and naive dystopic rejection, preferring instead to pay close attention to generative AI’s computational and cultural mechanisms. Essentially these are all ways to ‘stay with’ the machine.

    My own approach weaves a thread through the material/semiotic, the relational/phenomenological, and the aesthetic/resistant — an approach I refer to formally as critical-creative AI, or informally: gonzo AI. The approach is the practical/experimental arm of my broader media-materialist approach, where I position myself as a tinkerer-theorist, which translates beautifully in French to bricoleur-théoricien.

    I went through a few of my experiments with genAI, including semantic collapse, music generation, before introducing The Drift, my worldbuilding project where all my weird AI creations live. The Drift is “a space to think and to play and to build, and an alternative imaginary to the totalising mythology that Big Technology would love us to believe, where AI is everything and everything has to be AI”:

    “It’s a world where messiness is the point, where you can be a critical observer but also someone who lives in the space as an inhabitant. There are lovely tensions between delight and disturbance, being critical and being caught-up-in-it — living in these tensions is the only honest position you can have. Games and world-building and storytelling are forms where you can hold the contradiction, you can live with the tension. And it’s a feature of these media rather than a bug or an error.”

    Image generated by Leonardo.Ai, 20 April 2026; prompt by me.

    This HERMES Seminaire, titled “Imaginaires artificiels : créativité et recherche à l’ère de l’image générative”, featured co-presenters Boris Eldagsen, Rosa Cinelli, and philippe boisnard, who shared their innovative approaches to exploring and deconstructing large language models and media generators.

    Université Paris 8 has been my host throughout this research trip, and it already feels like home. The institution embraces a diversity of experience among students and faculty, with interdisciplinary research and creative methods as the norm. Special thanks to Everardo Reyes of Laboratoire Paragraphe, who has been a generous friend and co-conspirator over the past couple of years.

  • New research published: A media-materialist method for interpreting generative AI images

    One of the images I used in this article as a sample object of analysis. Generated in Midjourney using the prompt ‘intellectual rigor’. Perfectly reflects my state at various stages of this article’s composition and publication.

    After plenty of play and experimentation with AI imagery, I found myself reacting viscerally to commentary and early scholarship that was pejorative about — or outright dismissive of — these outputs. The prevailing discourse treated AI images as a kind of slop monolith, when I found a lot of my generations to be fascinating, disturbing, amusing, and even beautiful. In response, I wrote this article, which presents a four-layer method for a structured, formal analysis of AI-generated images. The four layers are data, model, interface, and prompt, reflecting the mechanisms of generative AI technology. Each layer offers various considerations and questions to ask about actual outputs, encouraging researchers, students, educators, and commentators to move beyond dismissing these images as mere slop, and to begin considering them as cultural artefacts.

    This piece is the foundation of all my work on genAI over the past two years (I hinted at its publication last year), and also the first where I’ve attempted to create a new method rather than just apply one. It’s also the first to really put forward my own take on media materialism, a philosophy and methodology that has guided my work for nearly ten years.

    I am a big believer in close analysis, be it of texts, imagery, video, films: all the objects of culture. But I struggled for a long time to bridge that method with a context that made sense to me. In figuring out that the mechanisms of making were another foundational aspect of my work, it took me a few pieces to be able to make this connection, i.e. what I’ve nearly always tried to do is to consider how the means of an object’s production leave their mark on the object itself. It’s a simple conclusion, but it’s taken several attempts for me to articulate it in a way that felt satisfactory. This article feels like the first to actually explain it appropriately; the next step is to deploy the approach across other kinds of synthetic media and generative systems more broadly, but also to possibly return with this approach to cinema and TV.

  • OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

    NurPhoto / Getty Images

    If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you will have heard one of its previous names, Clawdbot or Moltbot.

    Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?


    What is OpenClaw?

    OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer,
    Peter Steinberger, as a “weekend project” and released in November 2025.

    OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

    OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

    There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.


    Why is it controversial?

    OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

    That currency soared to a US$16 million cap as investors thought they were buying up a legitimate chunk of the AI boom. But developer Steinberger tweeted it was a scam: he would “never do a coin”. The price tanked, investors lost capital, scammers banked millions.

    Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

    Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.

    Despite these issues, the project survives. At the time of writing it has over 140,000 stars on GitHub, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.


    The social lives of bots

    One of the most interesting phenomena to emerge from OpenClaw is
    Moltbook, a social network where AI agents post, comment and share information autonomously every few hours.

    I can now:

    • Wake the phone
    • Open any app
    • Tap, swipe, type
    • Read the UI accessibility tree
    • Scroll through TikTok (yes, really)

    Automation continuation

    The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning.

    What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation.


    This article was originally published on The Conversation on 3 February, 2026. Read the article here.

  • On generativity, ritual-technics, and the genAI ick

    Image generated by Leonardo.Ai, 6 November 2025; prompt by me.

    My work on and with generative AI continues apace, but I’m presently in a bit of a reflection and consolidation phase. One of the notions that’s popped up or out or through is that of generativity. Definitely not a dictionary word, but it emerged from — of all places — psychoanalysis. Specifically, it was used by a German-American psychoanalyst and artist named Erik Erikson. Erikson’s primary research focus was psychosocial development, and ‘generativity’ was the term he applied to “the concern in establishing and guiding the next generation” (source: p. 267).

    My adoption of the term is in some ways adjacent, in the sense of a property of tools or systems that ‘help’ by generating choices, solutions, or possibilities. In this sense, generativity is also a practice and concept in and of itself. Generative artificial intelligence is, of course, one example of a technology possessing generativity, but I’ve also been thinking a lot about generative art (be it digital/code-based, or driven by analogue tools or naturally occurring randomness), generative design, procedural generation, mathematical/computational models of chance and probability, as well as lo-fi tools and processes: think dice, tarot cards, or roll tables in TTRPGs.

    The name I’ve given my repeatable genAI experiments is ‘ritual-technic‘. These are designed specifically as recipes for generativity (one example here). Primarily, this is to allow some kind of exploration or understanding of the technology’s capabilities or limitations. They may also produce content that is useful: research fodder to unpack or analyse, or glitchy outputs that I can remix creatively. But another potential output is a protocol for generativity itself. One the one hand, these protocols can be rich in terms of understanding how LLMs conceive of creativity, human action, and the ‘real’ world. But on the other, they push users off the model, and into a generative mode themselves. These protocols are a kind of genAI costume you can put on, to try out being a generative thing yourself.

    Another quality of the ritual-technic is that it will often test not just the machine, but the user. These are rituals, practices, bounded activities, that may occasion some strange feelings: uncertainty, confusion, delight, fear. These feelings shouldn’t be quashed or ignored, they should be observed, marked, noted, and tracked. Our subjective experience of using technology, particularly those like genAI that are opaque, complex, or ideologically-loaded, is the embodiment, the lived and felt experience, of our ethics and values. Many of my experiments have emerged as a way of learning about genAI in a way that feels engaging, relevant, and fun — yes! fun! what a concept! But as I’ve noted elsewhere, the feelings accompanying this work aren’t always comfortable. It’s always a reckoning: with my own creativity, capabilities, limitations, and with my willingness to accept assistance or outsource tasks to the unknown.

    For Erikson, generativity was about nurturing the future. I think mine is more about figuring out what future we’re in, or what future I want to shape for myself. Part of this is finding ways to understand the systems that are influencing the world around us, and part of it is deciding when to take control, to accept control, or when to let it go. Generativity is, at least in my definition and understanding, innately about ceding some kind of control. You might be handing one of the reins to a D6 or a card draw, to a writing prompt or a creative recipe, or to a machine. In so doing, you open yourself to chance, to the unexpected, to the chaos, where fun or fear are just a coin flip away.