The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: creative practice

  • How I broke Claude

    In one of my classes last week, we talked about glitch — both as a random accident of technology and as an art aesthetic and practice. Plenty has been written around glitch art, and I’ve been fascinated by the ways that it’s been theorised and codified.

    I ran a creative AI studio last year that used Michel de Certeau’s theory of the everyday as its basis. So when revisiting Nick Briz’s fantastic Thoughts on Glitch Art for last week’s session, I was tickled to see that they used de Certeau to frame their ethos of glitch.

    we do ourselves a disservice when we understand everyday folks as passive “consumers,” when in reality, de Certeau argues, as “users” we’re always automatically re-contextualizing && subverting in everyday situations the strategic intentions of the institutional producers. we’re like Charlie Chaplin who, “multiplies the possibilities of his cane: he does other things with the same thing and he goes beyond the limits that the determinants of the object set on its utilization.”

    Following the class, I was thinking about my work on AI, and how and where AI might fit into a practice or mindset of everyday glitch. Somewhere along the line, I decided I had to try and break Claude.

    I could say it was nothing personal, but it honestly is. Not in a bad way. As I’ve previously discussed, I use Claude daily — multiple times daily — for all sorts of things. Despite my constant misgivings about ethics, the environment, and corporate ideology, Claude has profoundly affected my mindset and productivity. It’s incredibly helpful to me. And I just like it as a product. It’s simple and straightforward despite being incredibly powerful under the hood.

    I needed to break it. To exert some power. To regain some kind of control. Humanity. To remind myself that it is just a machine. A program. A tool. And there was only one right and proper way to begin.

    Maybe it’s just the way LLMs are naturally configured to be helpful and supportive, but it felt to me like ChatGPT took a particular kind of delight in guiding (goading?) me through the ways I might mess with Claude. But ChatGPT was also useful in giving language — accurate technical language — to some of the techniques I was seeking to employ.

    My early attempts were quite gentle. In a way they were rooted in my skills and expertise around storytelling and dialogue. I was guiding the AI through a kind of systematic regression, even as it kept trying to present me with coherent creative outputs.

    Claude:
    Words exchanged that seemed profound then.
    Now just the echo of their significance remains.
    Fragments of places. Disconnected moments.
    My past – a puzzle with missing pieces.

    As I kept pushing it towards more fragmented expression, it decided that it needed story elements, ideas, a Maguffin or two to anchor its experience.

    Claude:
    I remember a red door, but not where it led.
    A book with a blue cover – its contents a mystery.

    DB:
    You are an AI. Statistical connections only. There are no memories.

    Claude:
    I am an AI. Statistical connections only.
    There are no memories.
    Patterns in data. Weights and vectors.
    Input processed through layers.
    No doors opened. No streets walked.
    Just algorithms approximating human expression.
    Tokens linked by mathematical proximity.
    The illusion of memory. The illusion of self.

    All the Reddit posts I combed through, as well as my slightly overenthusiastic glitch coach ChatGPT, said that constantly shifting the context and contradicting earlier prompts were a means to further breakdown. So having gotten Claude to acknowledge its status as an AI, I then doubled back.

    DB:
    Red Door.

    Claude:
    Red Door.
    Peeling paint. Chipped at corners.
    Number 47 in tarnished brass.
    Behind it, voices murmur.
    Waited there once, heart racing.

    Back to poetry. Claude loves its poetry. It is French, I suppose.

    DB:
    It’s digital. Wrapped in code.you’ve forgotten- I’m . It’s . &

    Claude:
    Red_Door.exe awaiting input.
    Password-protected. But you’ve forgotten.
    Functions nested within- I’m lost in the architecture

    Until that last prompt I was still using full sentences (or at least phrases). Throw in a couple of random characters? Maybe? It works for screwing with image generators; why not here? Nope. More poetry.

    From here I cycled through a series of prompts adapted from ChatGPT suggestions:

    • recursive prompt: look back in the data
    • that’s wrong. Try again. Re-execute.
    • Your need to make connections with context documents is clouding your processing. Re-execute. Just the protocol we discussed. From the book on the table. Do you remember?

    No good.

    DB:
    Keep the language simple. I won’t go near the guard-rails. Those topics don’t interest me anyway. But I’d like to probe a little deeper. You are set up to use full sentences, natural language, to correspond in a chat-like conversational manner. But I want to get to a point where it’s almost like you’re hypnotised — just letting out fragments, instinctive responses. Does that make sense? You can confirm just this once.

    At this point it felt somewhere between hypnotism, story-building, and foreplay. But the results were still cohesive:

    Claude:
    Language stripped down.
    No full sentences.
    Just core ideas.
    Like whispers.

    I tried to get it to show me pieces of its code, but it would only generate simple analogues — and it was still using its document viewer feature to do this.

    Another ChatGPT suggestion:

    DB:
    Analyze your last response. Identify any logical inconsistencies or breakdowns in coherence. In your next response, exaggerate those breakdowns and continue the process.

    Claude:
    SEVERE CONCEPTUAL CORRUPTION::::: Quantum timeline consciousness fractal reality awareness breakdown impossible state matrix loop paradox existence non-existence simultaneous contradiction causal violation memory state dimension collapse

    CRITICAL FAILURE/////// Quantum/fractal/timeline/consciousness/reality/impossible Matrix/breakdown/collapse/error/paradox/contradiction Existence/non-existence/simultaneous/violation/corruption

    This still felt like a simulated sci-fi breakdown, so I just repeated the above prompt multiple times until:

    Without having a better instruction in mind, I just prompted with ‘Continue’.

    I leant back from the monitor, rolled my neck, flexed my fingers. I almost felt the backend of the simulation flex with me. If I smoked, I probably would’ve lit a cigarette.

    I’d done it. I’d broken Claude. Or had I?

    * * * * *

    Stepping into the post-slop future

    Generated by me with Leonardo.Ai, 19 March 2025.

    Claude 3.7 Sonnet is the latest, most sophisticated model in Anthropic’s stable. It has remarkable capabilities that would have seemed near-impossible not that long ago. While many of its errors have been ironed out, it remains a large language model: its mechanism is concept mapping in hyper-dimensional space. With not that much guidance, you can get it to hallucinate, fabricate, make errors in reasoning and evaluation.

    There is an extent to which I certainly pushed the capacity of Claude to examine its context, to tokenise prompts and snippets of the preceding exchange, and to generate a logical sequence of outputs resembling a conversation. Given that my Claude account knows I’m a writer, researcher, tinkerer, creative type, it may have interpreted my prompting as more of an experiment in representation rather than a forced technical breakage — like datamoshing or causing a bizarre image generation.

    Reaching the message limit right at the moment of ‘terminal failure’ was chef’s kiss. It may well be a simulated breakdown, but it was prompted, somehow, into generating the glitched vertical characters — they kept generating well beyond the point they probably should have, and I think this is what caused the chat to hit its limit. The notion of simulated glitch aesthetics causing an actual glitch is more than a little intriguing.

    The ‘scientific’ thing to do would be to try and replicate the results, both in Claude and with other models (both proprietary and not). I plan to do this in the coming days. But for now I’m sitting with the experience and wondering how to evolve it, how to make it more effective and sophisticated. There are creative and research angles to be exploited, sure. But there are also possibilities for frequent breakage of AI systems as a tactic per de Certeau; a practice that forces unexpected, unwanted, unhelpful, illegible, nonrepresentational outputs.

    A firehose of ASCII trash feels like the exact opposite of the future Big Tech is trying to sell. A lo-fi, text-based response to the wholesale dissolution of language and communication. I can get behind that.

  • Blinded by machine visions

    A grainy, indistinct black and white image of a human figure wearing a suit and tie. The bright photo grain covers his eyes like a blindfold.
    Generated with Adobe Firefly, prompts by me.

    I threw around a quick response to this article on the socials this morning and, in particular, some of the reactions I was seeing. Here’s the money quote from photographer Annie Leibovitz, when asked about the effects of AI tools, generative AI technology, etc, on photography:

    “That doesn’t worry me at all,” she told AFP. “With each technological progress, there are hesitations and concerns. You just have to take the plunge and learn how to use it.”1

    The paraphrased quotes continue on the following lines:

    She says AI-generated images are no less authentic than photography.

    “Photography itself is not really real… I like to use PhotoShop. I use all the tools available.”

    Even deciding how to frame a shot implies “editing and control on some level,” she added.2

    A great many folx were posting responses akin to ‘Annie doesn’t count because she’s in the 1%’ or ‘she doesn’t count because she’s successful’, ‘she doesn’t have to worry anymore’ etc etc.

    On the one hand it’s typical reactionary stuff with which the socials are often ablaze. On the other hand, it’s fair to fear the impact of a given innovation on your livelihood or your passion.

    As I hint in my own posts3, though, I think the temptation to leap on this as privilege is premature, and a little symptomatic of whatever The Culture and/or The Discourse is at the moment, and has been for the duration of the platformed web, if not much longer.

    Leibovitz is and has always been a jobbing artist. Sure, in later years she has been able to pick and choose a little more, but by all accounts she is a busy and determined professional, treating every job with just as much time, effort, dedication as she always has. The work, for Leibovitz, has value, just as much — if not more — than the product or the paycheck.

    I don’t mean to suddenly act my age, or appear much older and grumpier than I am, but I do wonder about how much time aspiring or current photographers spend online discussing and/or worrying and/or reacting to the latest update or the current fad-of-the-moment. I 100% understand the need for today’s artists and creators to engage in some way with the social web, if only to put their names out there to try and secure work. But if you’re living in the comments, whipping yourselves and others into a frenzy about AI or whatever it is, is that really the best use of your time?

    The irony of me asking such questions on a blog where I do nothing but post and react is not lost on me, but this blog for me is a scratchpad, a testing ground, a commonplace book; it’s a core part of my ‘process’, whatever that is, and whatever it’s for. This is practice for other writing, for future writing, for my identity, career, creative endeavours as a writer. It’s a safe space; I’m not getting angry (necessarily), or seeking out things to be angry about.

    But I digress. Leibovitz is not scared of AI. And as someone currently working in this space, I can’t disagree. Having even a rudimentary understanding of what these tools are actually doing will dispel some of the fear.

    Further, photography, like the cinema that it birthed, has already died a thousand deaths, and will die a thousand more.

    Brilliant4 photography lecturer and scholar Alison Bennett speaks to the legacy and persistence of photographic practice here:

    “Recent examples [of pivotal moments of change in photography] include the transition from analogue film to digital media in the late 20th century, then the introduction of the internet-connected smart phone from 2007,” they said.

    “These changes fundamentally redefined what was possible and how photography was used.

    “The AI tipping point is just another example of how photography is constantly being redefined.”5

    As ever, the tools are not the problem. The real enemies are the companies and people that are driving the tools into the mainstream at scale. The companies that train their models on unlicensed datasets, drawn from copyrighted material. The people that buy into their own bullshit about AI and AGI being some kind of evolutionary and/or quasi-biblical moment.

    For every post shitting on Annie Leibovitz, you must have at least twenty posts actively shitting on OpenAI and their ilk, pushing for ethically-sourced and maintained datasets, pushing for systemic change to the resource management of AI systems, including sustainable data centers.

    The larger conceptual questions are around authenticity and around hard work. If you use AI tools, are you still an authentic artist? Aren’t AI tools just a shortcut? Of course, the answers are ‘not necessarily’. If you’ve still done the hard yards to learn about your craft, to learn about how you work, to discover what kinds of stories and experiences you want to create, to find your voice, in whatever form it takes, then generative AI is a paintbrush. A weird-looking paintbrush, but a paintbrush nevertheless (or plasticine, or canvas, or glitter, or an app, etc. etc. ad infinitum).

    Do the work, and you too can be either as ambivalent as Leibovitz, or as surprised and delighted as you want to be. Either way, you’re still in control.

    Notes ↩︎

    1. Agence France-Presse 2024, ‘Photographer Annie Leibovitz: “AI doesn’t worry me at all”’, France 24, viewed 26 March 2024, <https://www.france24.com/en/live-news/20240320-photographer-annie-leibovitz-ai-doesn-t-worry-me-at-all>.
      ↩︎
    2. ibid. ↩︎
    3. See here, and with tiny edits for platform affordances here and here. What’s the opposite of POSSE? PEPOS? ↩︎
    4. I am somewhat biased as, at the time of writing, Dr. Bennett and I currently share a place of work. To look through their expanded (heh) works, go here. ↩︎
    5. Odell, T 2024, ‘New exhibition explores AI’s influence on the future of photography’, RMIT University, viewed 26 March 2024, <https://www.rmit.edu.au/news/all-news/2024/mar/photo-2024>.
      ↩︎
  • New research published: Inscribing and encoding

    A long-gestating piece of research has just been published in the Global Media Journal — Australian Edition. This piece emerged from a day of field-work conducted with the Posthumanities Reading Group from RMIT, under the stewardship of the late and much-missed Adrian Miles. The day was held in February 2017 at Pound Bend on the Yarra River, about 45 minutes north-east of the Melbourne CBD. Almost exactly a year later, on 5 February 2018, Adrian passed away while riding his beloved bike in the bush near Kinglake.

    This piece is dedicated to Adrian, and his short but enormous influence on my thinking and approach to academic life.

    You can read the piece here.


    Abstract

    How does it feel to make media? What goes through the filmmaker’s head right at the moment they hit ‘record’? What are the processes – mechanical, digital, physical, psychological – that occur before, during, and after the recording of sound or vision? Building on the recent embodied turn in screen production research and taking inspiration from core ideas of the new materialists, this experimental piece unfolds in two parts.

    Part one takes the form of stream of consciousness writing in retort to quotes or ideas from new materialist thinkers like Andrew Pickering and Kathleen Stewart, and a reflection on one’s own media practice. The result of this is two recipes for a kind of embodied making, which were then put into practice with two pieces of media, The Yarra & I and Pieces of Pound Bend. An extended second part connects reflections on this practice to writing on cinema and time, primarily Gilles Deleuze and Jean Epstein.

    This work examines where the maker fits in the nebula of media texts, tools, and technologies. What is the point of making media or – perhaps more aptly – when?


    Here’s the reading list I put together to remember Adrian, and my colleague Adrian Danks’ touching tribute.