The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Author: dan

  • Against the totalising imaginary

    Dans le vif: Presenting at Campus Condorcet, Friday 24 April 2026.

    My sabbatical in France has continued apace, with plenty of fruitful meetings and discussions, and not a little writing (deadlines sadly declined a similar holiday).

    On 24 April, I had the opportunity to present some of my research at Université Paris 8-Vincennes-Saint Denis — a university founded by figures including Jacques Derrida, Hélène Cixous, and Roland Barthes in the aftermath of May ’68.

    I presented a talk titled “Against the Totalising Imaginary: Weird AI and the Ecology of the Possible”, in which I discussed my glitch-based experiments and methodologies, which I refer to as ‘ritual-technics’. For the first time, I also proposed worldbuilding and storytelling as productive frameworks for engaging with technologies like generative AI.

    I began with the Slopocene. This has been bandied about as a pejorative term for our current overload of synthetic content and governance by algorithm, with the resulting crises of authenticity, ‘reality’, and authorship. As in other work, I’m working to reclaim the Slopocene as a productive and playful term, but also as a speculative near-future or alt-present, where recursive training collapse turns the web into a haunted archive of confused bots, discarded memes, and broken truths.

    How to navigate the Slopocene? I co-opted the work of my co-presenters for the seminar: Boris Eldagsen, Rosa Cinelli, and Philippe Boisnard, alongside Chris Chesher and Cesar Albarran-Torres, Eryk Salvaggio, and Ian Haig. These are diverse approaches, but they have a few common clusters: material/semiotic, i.e. we can read AI outputs diagnostically as results of training data; relationality/phemomenology, in terms of what kind of encounter or interaction we have with AI technology; and then an aesthetic/resistant thread, which finds value in the visual breakdown and visceral sensation of encountering AI media.

    These are methods, approaches, attitudes that resist zealous techno-utopia or simplistic and naive dystopic rejection, preferring instead to pay close attention to generative AI’s computational and cultural mechanisms. Essentially these are all ways to ‘stay with’ the machine.

    My own approach weaves a thread through the material/semiotic, the relational/phenomenological, and the aesthetic/resistant — an approach I refer to formally as critical-creative AI, or informally: gonzo AI. The approach is the practical/experimental arm of my broader media-materialist approach, where I position myself as a tinkerer-theorist, which translates beautifully in French to bricoleur-théoricien.

    I went through a few of my experiments with genAI, including semantic collapse, music generation, before introducing The Drift, my worldbuilding project where all my weird AI creations live. The Drift is “a space to think and to play and to build, and an alternative imaginary to the totalising mythology that Big Technology would love us to believe, where AI is everything and everything has to be AI”:

    “It’s a world where messiness is the point, where you can be a critical observer but also someone who lives in the space as an inhabitant. There are lovely tensions between delight and disturbance, being critical and being caught-up-in-it — living in these tensions is the only honest position you can have. Games and world-building and storytelling are forms where you can hold the contradiction, you can live with the tension. And it’s a feature of these media rather than a bug or an error.”

    Image generated by Leonardo.Ai, 20 April 2026; prompt by me.

    This HERMES Seminaire, titled “Imaginaires artificiels : créativité et recherche à l’ère de l’image générative”, featured co-presenters Boris Eldagsen, Rosa Cinelli, and philippe boisnard, who shared their innovative approaches to exploring and deconstructing large language models and media generators.

    Université Paris 8 has been my host throughout this research trip, and it already feels like home. The institution embraces a diversity of experience among students and faculty, with interdisciplinary research and creative methods as the norm. Special thanks to Everardo Reyes of Laboratoire Paragraphe, who has been a generous friend and co-conspirator over the past couple of years.

  • New research published: A media-materialist method for interpreting generative AI images

    One of the images I used in this article as a sample object of analysis. Generated in Midjourney using the prompt ‘intellectual rigor’. Perfectly reflects my state at various stages of this article’s composition and publication.

    After plenty of play and experimentation with AI imagery, I found myself reacting viscerally to commentary and early scholarship that was pejorative about — or outright dismissive of — these outputs. The prevailing discourse treated AI images as a kind of slop monolith, when I found a lot of my generations to be fascinating, disturbing, amusing, and even beautiful. In response, I wrote this article, which presents a four-layer method for a structured, formal analysis of AI-generated images. The four layers are data, model, interface, and prompt, reflecting the mechanisms of generative AI technology. Each layer offers various considerations and questions to ask about actual outputs, encouraging researchers, students, educators, and commentators to move beyond dismissing these images as mere slop, and to begin considering them as cultural artefacts.

    This piece is the foundation of all my work on genAI over the past two years (I hinted at its publication last year), and also the first where I’ve attempted to create a new method rather than just apply one. It’s also the first to really put forward my own take on media materialism, a philosophy and methodology that has guided my work for nearly ten years.

    I am a big believer in close analysis, be it of texts, imagery, video, films: all the objects of culture. But I struggled for a long time to bridge that method with a context that made sense to me. In figuring out that the mechanisms of making were another foundational aspect of my work, it took me a few pieces to be able to make this connection, i.e. what I’ve nearly always tried to do is to consider how the means of an object’s production leave their mark on the object itself. It’s a simple conclusion, but it’s taken several attempts for me to articulate it in a way that felt satisfactory. This article feels like the first to actually explain it appropriately; the next step is to deploy the approach across other kinds of synthetic media and generative systems more broadly, but also to possibly return with this approach to cinema and TV.

  • Like No One Is Watching

    Title slide of my paper “Like No One Is Watching”.

    I’ve kicked off a month’s research sabbatical in France, hitting the ground running…

    My first invited presentation was today at Université Paris I: Panthéon-Sorbonne, as part of the journée d’étude “L’intelligence et l’éthique de la télévision à l’ère des algorithms”. Today’s talks looked at de-ageing as a quest for immortality and fracturing of the present, televisuality and intelligence, and teaching LLMs about humans by making them watch a lot of TV; the seminar concludes tomorrow.

    My own piece, “Like No One Is Watching: The Form of Television in the Algorithmic Moment”, examined how episodic storytelling navigates the constraints of the platform and attention economies. I looked at the chaotic inconsistency of The Bear and the aggressive tedium of The Pitt as shows pushing formal boundaries to reassert a direct relationship with their audience.

    The talk had three key moves.

    Firstly, I re-establish television as the ‘miscreant medium’, drawing from John Fiske and John Hartley’s seminal work. On the one hand, television has always served as a scapegoat or delivery channel for whatever moral panic is current at the time; alongside this, it is a medium perennially torn between the strictures of institutions and technology, and the creativity of its artists.

    Secondly, I argue that platform logic holds two contradictory assumptions about audiences. On one hand, there is an assumption that audiences are passive and distracted. This assumption leads to baked-in redundancies, including explicit exposition and constant re-explanation (a phenomenon that Will Tavlin explores in his piece ‘Casual Viewing’). On the other hand, platform capitalism is contingent on metrics of retention; active, engaged viewing, then, is assumed.

    In the third section, I spoke to sample clips from The Bear and The Pitt, both shows that embody and embrace this presumptive schizophrenia. From The Bear I played part of the seventh episode of the first season, which includes a 17-minute unbroken take. I also shared a couple of mundane conversation scenes from the premiere episode of The Pitt. I used formal analysis here as a diagnostic tool, to observe how creatives push against (or acquiesce to) the algorithmic frame of their distribution. In the case of both shows, I offered that formal experimentation — whether at a dialogue, scene, episode, or series level — demonstrates friction as an exercise in meaning-making: a conversation and negotiation between creator and audience quite apart from questions of data, platform, capital.

    What close formal analysis reveals is that television is not a medium in decline, but one still jovially misbehaving; always exceeding what the discourse says it’s capable of, and still worth watching.

    This talk was a return to formal analysis for me, and it felt great to be home. I’ve been very lucky to be taught by or to work with a bunch of academics who really value close textual analysis, and I think it’s such an incisive and enjoyable means of understanding texts and their contexts.

    It’s highly likely an edited collection will result from this gathering, so fingers crossed that this work will be in print soon!

    Giving my talk at Université Paris 1-Panthéon Sorbonne. Photo thanks to Sandra Laugier.

    I now have a little breathing room before my second presentation, so I’ll be using this time to actually get out and wander around Paris a little, but also to feed and tend to a few items moving through the publication pipeline.

  • OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

    NurPhoto / Getty Images

    If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you will have heard one of its previous names, Clawdbot or Moltbot.

    Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?


    What is OpenClaw?

    OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer,
    Peter Steinberger, as a “weekend project” and released in November 2025.

    OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

    OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

    There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.


    Why is it controversial?

    OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

    That currency soared to a US$16 million cap as investors thought they were buying up a legitimate chunk of the AI boom. But developer Steinberger tweeted it was a scam: he would “never do a coin”. The price tanked, investors lost capital, scammers banked millions.

    Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

    Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.

    Despite these issues, the project survives. At the time of writing it has over 140,000 stars on GitHub, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.


    The social lives of bots

    One of the most interesting phenomena to emerge from OpenClaw is
    Moltbook, a social network where AI agents post, comment and share information autonomously every few hours.

    I can now:

    • Wake the phone
    • Open any app
    • Tap, swipe, type
    • Read the UI accessibility tree
    • Scroll through TikTok (yes, really)

    Automation continuation

    The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning.

    What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation.


    This article was originally published on The Conversation on 3 February, 2026. Read the article here.