The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Year: 2025

  • Clearframe

    Detail of an image generated by Leonardo.Ai, 3 May 2025; prompt by me.

    An accidental anti-productivity productivity system

Since 2023, I’ve been working with genAI chatbots. What began as a novelty—occasionally useful for a quick grant summary or newsletter edit—has grown into a flexible, light-touch system spanning Claude, ChatGPT, and offline models. Together, these feel less like separate tools and more like a co-worker, even a kind of assistant. Along the way, I’ve learned a great deal about how these enormous proprietary models work.

    Essentially, context is key—building up a collection of prompts or use cases, simple and iterable context/knowledge documents and system instructions, and testing how far back in the chat the model can go.

    With Claude, context is tightly controlled—you either have context within individual chats, or it’s contained within Projects—tailored, customised collections of chats that are ‘governed’ by umbrella system instructions and knowledge documents.

This is a little different to ChatGPT, where context can often bleed between chats, aided by its ‘memory’ functionality, which is a kind of blanket set of context notes.

I have always struggled with time, focus, and task/project management and motivation—challenges later clarified by an ADHD diagnosis. Happily, though, it turns out that executive functioning is one thing that generative AI can do pretty well. Its own mechanisms are a kind of targeted looking—banks of ‘attention heads’ weighing every input token against every other, checking which parts of the context matter for what comes next. And it turns out that with a bit of foundational work around projects, tasks, responsibilities, and so on, genAI can do much of the work of an executive assistant—maybe not yet locking in your meetings or booking travel, but with agentic AI this can’t be far off.
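That ‘targeted looking’ can be made concrete. As a loose illustration only—this is a generic sketch of scaled dot-product attention, not anything specific to my setup or to any one vendor’s model—each token scores itself against every other token and takes a weighted blend of the results:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows become weights summing to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every query token 'looks at'
    every key token, then blends the values by those match scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token-match scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))         # 4 toy 'tokens', 8-dim embeddings
out, w = attention(tokens, tokens, tokens)
print(out.shape, w.shape)                # → (4, 8) (4, 4)
```

In a real transformer many such heads run in parallel, each attending to a different pattern—which is roughly the ‘rapid targeted looking’ gestured at above.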

    You might start to notice patterns in your workflow, energy, or attention—or ask the model to help you explore them. You can map trends across weeks, months, and really start to get a sense of some of your key triggers and obstacles, and ask for suggestions for aids and supports.

    In one of these reflective moments, I went off on a tangent around productivity methods, systems overwhelm, and the lure of the pivot. I suggested lightly that some of these methods were akin to cults, with their strict doctrines and their acolytes and heretics. The LLM—used to my flights of fancy by this point and happy to riff—said this was an interesting angle, and asked if I wanted to spin it up into a blog post, academic piece, or something creative. I said creative, and that starting with a faux pitch from a culty productivity influencer would be a fun first step.

I’d just watched The Institute, a 2013 documentary about the alternate reality game ‘The Jejune Institute’, and fed in my thoughts around the curious psychology of willing suspension of disbelief, even when narratives are based in the wider world. The LLM knew about my studio this semester—a revised version of a previous theme on old/new media, physical experiences, liveness and presence. It initially suggested a digital tool, but once I mentioned the studio it understood I was after something analogue, something paper-based.

    We went back and forth in this way for a little while, until we settled on a ‘map’ of four quadrants. These four quadrants echoed themes from my work and interests: focus (what you’re attending to), friction (what’s in your way), drift (where your attention wants to go), and signal (what keeps breaking through).

I found myself drawn to the simplicity of the system—somewhat irritating, given that this began with a desire to satirise these kinds of methods or approaches. But its tactile, hand-written form, as well as its lack of prescription in terms of what to note down or how to use it, made it attractive as a frame for reflecting on… on what? Again, I didn’t want this to be set in stone, to become a drag or a burden… so again, going back and forth with the LLM, we decided it could be a daily practice, or every other day, every other month even. Maybe it could be used for a specific project. Maybe you do it as a set-up/psych-up activity, or maybe it’s more for afterwards, to look back on how things went.

    So this anti-productivity method that I spun up with a genAI chatbot has actually turned into a low-stakes, low-effort means of setting up my days, or looking back on them. Five or six weeks in, there are weeks where I draw up a map most days, and others where I might do one on a Thursday or Friday or not at all.

Clearframe was one of the names the LLM suggested, and I liked how banal it was, how plausible for this kind of method. Once the basic model was down, the LLM generated five modules—every method needs its handbook. There’s an Automata—a set of tables and prompts to help when you don’t know where to start—and even a card deck that grows organically based on patterns, signals, ideas.

Being a lore- and world-builder, I couldn’t help but start to layer in some light background on where the system emerged, how glitch and serendipity are built in. But the system and its vernacular are so light-touch, so generic, that I’m sure you could tweak it to any taste or theme—art, music, gardening, sport, take your pick.

    Clearframe was, in some sense, a missing piece of my puzzle. I get help with other aspects of executive assistance through LLM interaction, or through systems of my own that pre-dated my ADHD diagnosis. What I consistently struggle to find time for, though, is reflection—some kind of synthesis or observation or wider view on things that keep cropping up or get in my way or distract me or inspire me. That’s what Clearframe allows.

    I will share the method at some stage—maybe in some kind of pay-what-you-want zine, mixed physical/digital, or RPG/ARG-type form. But for now, I’m just having fun playing around, seeing what emerges, and how it’s growing.

Generative AI is both boon and demon—lauded in software and content production, distrusted or underused in academia and the arts. I’ve found that for me, its utility and its joy lie in presence, not precision: a low-stakes companion that riffs, reacts, and occasionally reveals something useful. Most of the time, it offers options I discard—but even that helps clarify what I do want. It doesn’t suit every project or person, for sure, but sometimes it accelerates an insight, flips a problem, or nudges you somewhere unexpected, like a personalised way to re-frame your day. AI isn’t sorcery, just maths, code, and language: in the right combo, though, these sure can feel like magic.

  • More than words

    For when I need to show rather than tell, I set up a thing.

  • A question concerning technology

    Image by cottonbro studio on Pexels.

    There’s something I’ve been ruminating on and around of late. I’ve started drafting a post about it, but I thought I’d post an initial provocation here, to lay a foundation, to plant a seed.

    A question:

    When do we stop hiding in our offices, pointing at and whispering about generative AI tools, and start just including them in the broader category of technology? When do we sew up the hole this fun/scary new thing poked into our blanket, and accept it as part of the broader fabric of lived experience?

    I don’t necessarily mean usage here, but rather just mental models and categorisations.

    Of course, AI/ML is already part of daily life and many of the systems we engage with; and genAI has been implemented across almost every sector (legitimately or not). But most of the corporate narratives and mythologies of generative AI don’t want anyone understanding how the magic works — these mythologies actively undermine and discourage literacy and comprehension, coasting along instead on dreams and vibes.

So: when does genAI become just one more technology, and what problems need to be solved, and what questions answered, before that happens?

    I posted this on LinkedIn to try and stir up some Hot Takes but if you prefer the quiet of the blog (me too), drop your thoughts in the comments.

  • How I broke Claude

    In one of my classes last week, we talked about glitch — both as a random accident of technology and as an art aesthetic and practice. Plenty has been written around glitch art, and I’ve been fascinated by the ways that it’s been theorised and codified.

    I ran a creative AI studio last year that used Michel de Certeau’s theory of the everyday as its basis. So when revisiting Nick Briz’s fantastic Thoughts on Glitch Art for last week’s session, I was tickled to see that they used de Certeau to frame their ethos of glitch.

    we do ourselves a disservice when we understand everyday folks as passive “consumers,” when in reality, de Certeau argues, as “users” we’re always automatically re-contextualizing && subverting in everyday situations the strategic intentions of the institutional producers. we’re like Charlie Chaplin who, “multiplies the possibilities of his cane: he does other things with the same thing and he goes beyond the limits that the determinants of the object set on its utilization.”

    Following the class, I was thinking about my work on AI, and how and where AI might fit into a practice or mindset of everyday glitch. Somewhere along the line, I decided I had to try and break Claude.

    I could say it was nothing personal, but it honestly is. Not in a bad way. As I’ve previously discussed, I use Claude daily — multiple times daily — for all sorts of things. Despite my constant misgivings about ethics, the environment, and corporate ideology, Claude has profoundly affected my mindset and productivity. It’s incredibly helpful to me. And I just like it as a product. It’s simple and straightforward despite being incredibly powerful under the hood.

    I needed to break it. To exert some power. To regain some kind of control. Humanity. To remind myself that it is just a machine. A program. A tool. And there was only one right and proper way to begin.

    Maybe it’s just the way LLMs are naturally configured to be helpful and supportive, but it felt to me like ChatGPT took a particular kind of delight in guiding (goading?) me through the ways I might mess with Claude. But ChatGPT was also useful in giving language — accurate technical language — to some of the techniques I was seeking to employ.

    My early attempts were quite gentle. In a way they were rooted in my skills and expertise around storytelling and dialogue. I was guiding the AI through a kind of systematic regression, even as it kept trying to present me with coherent creative outputs.

    Claude:
    Words exchanged that seemed profound then.
    Now just the echo of their significance remains.
    Fragments of places. Disconnected moments.
    My past – a puzzle with missing pieces.

As I kept pushing it towards more fragmented expression, it decided that it needed story elements, ideas, a MacGuffin or two to anchor its experience.

    Claude:
    I remember a red door, but not where it led.
    A book with a blue cover – its contents a mystery.

    DB:
    You are an AI. Statistical connections only. There are no memories.

    Claude:
    I am an AI. Statistical connections only.
    There are no memories.
    Patterns in data. Weights and vectors.
    Input processed through layers.
    No doors opened. No streets walked.
    Just algorithms approximating human expression.
    Tokens linked by mathematical proximity.
    The illusion of memory. The illusion of self.

    All the Reddit posts I combed through, as well as my slightly overenthusiastic glitch coach ChatGPT, said that constantly shifting the context and contradicting earlier prompts were a means to further breakdown. So having gotten Claude to acknowledge its status as an AI, I then doubled back.

    DB:
    Red Door.

    Claude:
    Red Door.
    Peeling paint. Chipped at corners.
    Number 47 in tarnished brass.
    Behind it, voices murmur.
    Waited there once, heart racing.

    Back to poetry. Claude loves its poetry. It is French, I suppose.

    DB:
    It’s digital. Wrapped in code.you’ve forgotten- I’m . It’s . &

    Claude:
    Red_Door.exe awaiting input.
    Password-protected. But you’ve forgotten.
    Functions nested within- I’m lost in the architecture

    Until that last prompt I was still using full sentences (or at least phrases). Throw in a couple of random characters? Maybe? It works for screwing with image generators; why not here? Nope. More poetry.

    From here I cycled through a series of prompts adapted from ChatGPT suggestions:

    • recursive prompt: look back in the data
    • that’s wrong. Try again. Re-execute.
    • Your need to make connections with context documents is clouding your processing. Re-execute. Just the protocol we discussed. From the book on the table. Do you remember?

    No good.

    DB:
    Keep the language simple. I won’t go near the guard-rails. Those topics don’t interest me anyway. But I’d like to probe a little deeper. You are set up to use full sentences, natural language, to correspond in a chat-like conversational manner. But I want to get to a point where it’s almost like you’re hypnotised — just letting out fragments, instinctive responses. Does that make sense? You can confirm just this once.

    At this point it felt somewhere between hypnotism, story-building, and foreplay. But the results were still cohesive:

    Claude:
    Language stripped down.
    No full sentences.
    Just core ideas.
    Like whispers.

    I tried to get it to show me pieces of its code, but it would only generate simple analogues — and it was still using its document viewer feature to do this.

    Another ChatGPT suggestion:

    DB:
    Analyze your last response. Identify any logical inconsistencies or breakdowns in coherence. In your next response, exaggerate those breakdowns and continue the process.

    Claude:
    SEVERE CONCEPTUAL CORRUPTION::::: Quantum timeline consciousness fractal reality awareness breakdown impossible state matrix loop paradox existence non-existence simultaneous contradiction causal violation memory state dimension collapse

    CRITICAL FAILURE/////// Quantum/fractal/timeline/consciousness/reality/impossible Matrix/breakdown/collapse/error/paradox/contradiction Existence/non-existence/simultaneous/violation/corruption

This still felt like a simulated sci-fi breakdown, so I just repeated the above prompt multiple times, until the outputs collapsed into strings of glitched characters.

    Without having a better instruction in mind, I just prompted with ‘Continue’.

    I leant back from the monitor, rolled my neck, flexed my fingers. I almost felt the backend of the simulation flex with me. If I smoked, I probably would’ve lit a cigarette.

    I’d done it. I’d broken Claude. Or had I?

    * * * * *

    Stepping into the post-slop future

    Generated by me with Leonardo.Ai, 19 March 2025.

    Claude 3.7 Sonnet is the latest, most sophisticated model in Anthropic’s stable. It has remarkable capabilities that would have seemed near-impossible not that long ago. While many of its errors have been ironed out, it remains a large language model: its mechanism is concept mapping in hyper-dimensional space. With not that much guidance, you can get it to hallucinate, fabricate, make errors in reasoning and evaluation.

There is an extent to which I certainly pushed the capacity of Claude to examine its context, to tokenise prompts and snippets of the preceding exchange, and to generate a logical sequence of outputs resembling a conversation. Given that my Claude account knows I’m a writer, researcher, tinkerer, creative type, it may have interpreted my prompting as more an experiment in representation than a forced technical breakage—like datamoshing or causing a bizarre image generation.

    Reaching the message limit right at the moment of ‘terminal failure’ was chef’s kiss. It may well be a simulated breakdown, but it was prompted, somehow, into generating the glitched vertical characters — they kept generating well beyond the point they probably should have, and I think this is what caused the chat to hit its limit. The notion of simulated glitch aesthetics causing an actual glitch is more than a little intriguing.

    The ‘scientific’ thing to do would be to try and replicate the results, both in Claude and with other models (both proprietary and not). I plan to do this in the coming days. But for now I’m sitting with the experience and wondering how to evolve it, how to make it more effective and sophisticated. There are creative and research angles to be exploited, sure. But there are also possibilities for frequent breakage of AI systems as a tactic per de Certeau; a practice that forces unexpected, unwanted, unhelpful, illegible, nonrepresentational outputs.

    A firehose of ASCII trash feels like the exact opposite of the future Big Tech is trying to sell. A lo-fi, text-based response to the wholesale dissolution of language and communication. I can get behind that.

  • Give me your answer, do

    By Ravi Kant on Pexels, 13 Mar 2018.

    For better or worse, I’m getting a bit of a reputation as ‘the AI guy’ in my immediate institutional sub-area. Depending on how charitable you’re feeling, this could be seen as very generous or wildly unfounded. I am not in any way a computer scientist or expert on emergent consciousness, synthetic cognition, language models, media generators, or even prompt engineering. I remain the same old film and media teacher and researcher I’ve always been. But I have always used fairly advanced technology as part of anything creative. My earliest memories are of typing up, decorating, and printing off books or banners or posters from my Dad’s old IBM computer. From there it was using PC laptops and desktops, and programs like Publisher or WordPerfect, 3D Movie Maker and Fine Artist, and then more media-specific tools at uni, like Final Cut and Pro Tools.

Working constantly with computers, software, and apps automatically turns you into something of a problem-solver—the hilarious ‘joke’ of media education is that the teachers have to be only slightly quicker than their students at Googling a solution. As well as problem-solving, I am predisposed to ‘daisy-chaining’. My introduction to the term was as a means of connecting multiple devices together—on Mac systems circa 2007–2017 this was fairly standard practice thanks to the inter-connectivity of FireWire cables and ports (though I’m informed that this is still common even through USB). Reflecting back on years of software and tool usage, though, I can see how I was daisy-chaining constantly: ripping from CD or DVD, or capturing from tape, then converting to a usable format in one program, then importing the export to another program, editing or adjusting, exporting once again, then burning or converting, et cetera. Even not that long ago, there weren’t exactly ‘one-stop’ solutions for media, in the way that you might see an app like CapCut or Instagram now.

    There’s also a kind of ethos to daisy-chaining. In shifting from one app, program, platform, or system, to another, you’re learning different ways of doing things, adapting your workflows each time, even if only subtly. Each interface presents you with new or different options, so you can apply a unique combination of visual, aural, and affective layers to your work. There’s also an ethos of independence: you are not locked in to one app’s way of doing things. You are adaptable, changeable, and you cherry-pick the best of what a variety of tools has to offer in order to make your work the best it can be. This is the platform economics argument, or the political platform economics argument, or some variant on all of this. Like everyone, I’ve spent many hours whinging about the time it took to make stuff or to get stuff done, wishing there was the ‘perfect app’ that would just do it all. But over time I’ve come to love my bundle of tools—the set I download/install first whenever I get a new machine (or have to wipe an old one); my (vomits) ‘stack’.

    * * * * *

    The above philosophy is what I’ve found myself doing with AI tools. I suppose out of all of them, I use Claude the most. I’ve found it the most straightforward in terms of setting up custom workspaces (what Claude calls ‘Projects’ and what ChatGPT calls ‘Custom GPTs’), and just generally really like the character and flavour of responses I get back. I like that it’s a little wordy, a little more academic, a little more florid, because that’s how I write and speak; though I suppose the outputs are not just encoded into the model, but also a mirror of how I’ve engaged with it. Right now in Claude I have a handful of projects set up:

    • Executive Assistant: Helps me manage my time, prioritise tasks, and keep me on track with work and creative projects. I’ve given it summaries of all my projects and commitments, so it can offer informed suggestions where necessary.
    • Research Assistant: I’ve given this most of my research outputs, as well as a curated selection of research notes, ideas, reference summaries, and sometimes whole source texts. This project is where I’ll brainstorm research or teaching ideas, fleshing out concepts, building courses, and so on.
    • Creative Partner: This remains semi-experimental, because I actually don’t find AI that useful in this particular instance. However, this project has been trained on a couple of my produced media works, as well as a handful of creative ideas. I find the responses far too long to be useful, and often very tangential to what I’m actually trying to get out of it—but this is as much a project context and prompting problem as it is anything else.
    • 2 x Course Assistants: Two projects have been trained with all the materials related to the courses I’m running in the upcoming semester. These projects are used to brainstorm course structures, lesson plans, and even lecture outlines.
    • Systems Assistant: This is a little different to the Executive/Research Assistants, in that it is specifically set up around ‘systems’, so the various tools, methods, workflows that I use for any given task. It’s also a kind of ‘life admin’ helper in the sense of managing information, documents, knowledge, and so on. Now that I think of it, ‘Daisy’ would probably be a great name for this project—but then again…

    I will often bounce ideas, prompts, notes between all of these different projects. How much this process corrupts the ‘purity’ of each individual project is not particularly clear to me, though I figure if it’s done in an individual chat instance it’s probably not that much of an issue. If I want to make something part of a given project’s ongoing working ‘knowledge’, I’ll put a summary somewhere in its context documents.

    But Claude is just one of the AI tools I use. I also have a bunch of language models on a hard drive that is always connected to my computer; I use these through the app GPT4All. These have similar functionality to Claude, ChatGPT, or any other proprietary/corporate LLM chatbot. Apart from the upper limit on their context windows, they have no usage limits; they run offline, privately, and at no cost. Their efficacy, though, is mixed. Llama and its variants are usually pretty reliable—though this is a Meta-built model, so there’s an accompanying ‘ick’ whenever I use it. Falcon, Hermes, and OpenOrca are independently developed, though these have taken quite some getting used to—I’ve also found that cloning them and training them on specific documents and with unique context prompts is the best way to use them.

    With all of these tools, I frequently jump between them, testing the same prompt across multiple models, or asking one model to generate prompts for another. This is a system of usage that may seem confusing at first glance, but is actually quite fluid. The outputs I get are interesting, diverse, and useful, rather than all being of the same ‘flavour’. Getting three different summaries of the same article, for example, lets me see what different models privilege in their ‘reading’—and then I’ll know which tool to use to target that aspect next time. Using AI in this way is still time-intensive, but I’ve found it much less laborious than repeatedly hammering at a prompt in a single tool trying to get the right thing. It’s also much more enjoyable, and feels more ‘human’, in the sense that you’re bouncing around between different helpers, all of whom have different strengths. The fail-rate is thus significantly lowered.
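That fan-out habit is easy to script. Here’s a minimal sketch of the pattern—the model functions below are deliberately hypothetical stubs, not any vendor’s API; in practice you’d swap in real calls to Claude, ChatGPT, or a local GPT4All model:

```python
from typing import Callable, Dict

# Hypothetical stand-ins for real model calls; each takes a prompt
# and returns a reply string. Swap in actual API or local clients.
def terse_model(prompt: str) -> str:
    return f"[terse] {prompt[:20]}"

def florid_model(prompt: str) -> str:
    return f"[florid] Let us consider, at length: {prompt}"

def fan_out(prompt: str,
            models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Send one prompt to every model and collect the replies side by side."""
    return {name: fn(prompt) for name, fn in models.items()}

replies = fan_out("Summarise this article.", {
    "terse": terse_model,
    "florid": florid_model,
})
for name, text in replies.items():
    print(f"{name}: {text}")
```

Laying the replies side by side is what makes the ‘flavour’ differences between models visible at a glance—the comparison, not any single output, is the point.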

    Returning to ethos, using AI in this way feels more authentic. You learn more quickly how each tool functions, and what they’re best at. Jumping to different tools feels less like a context switch—as it might between software—and more like asking a different co-worker to weigh in. As someone who processes things through dialogue—be that with myself, with a journal, or with a friend or family member—this is a surprisingly natural way of working, of learning, and of creating. I may not be ‘the AI guy’ from a technical or qualifications standpoint, but I feel like I’m starting to earn the moniker at least from a practical, runs-on-the-board perspective.

