The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: technology

  • Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings

    AI-generated with Leonardo Phoenix 1.0. Author supplied

    Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content that feels a little too real.

    The markers of AI-generated media are becoming harder to spot as technology companies work to iron out the kinks in their generative artificial intelligence (AI) models.

    But what if, instead of trying to detect and avoid these glitches, we deliberately encouraged them? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.

    When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.

    In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.

    Welcome to the Slopocene

    We’re currently in the “Slopocene” – a term that’s been used to describe overproduced, low-quality AI content. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.

    AI “hallucinations” are outputs that seem coherent, but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues large language models (LLMs) hallucinate all the time, and it’s only when they “go into deemed factually incorrect territory that we label it a ‘hallucination’. It looks like a bug, but it’s just the LLM doing what it always does.”

    What we call hallucination is actually the model’s core generative process, which relies on statistical language patterns.

    In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.

    This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.

    Pushing a chatbot to its limits

    If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?

    With this in mind, I decided to “break” Anthropic’s proprietary Claude Sonnet 3.7 model by prompting it to resist its training: suppress coherence and speak only in fragments.

    The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.

    A language model in collapse. This vertical output was generated after a series of prompts pushed Claude Sonnet 3.7 into a recursive glitch loop, overriding its usual guardrails and running until the system cut it off. Screenshot by author.

    Prompting a chatbot into such a collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns, not genuine comprehension.

    Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, just with different levels of coherence imposed on top.

    ‘Rewilding’ AI media

    If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow this term from ecology and conservation, where rewilding involves restoring functional ecosystems. This might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.

    Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimised out of commercial systems. Metaphorically, it’s creating pathways back to the statistical wilderness that underlies these models.

    Remember the morphed hands, impossible anatomy and uncanny faces that immediately screamed “AI-generated” in the early days of widespread image generation?

    These so-called failures were windows into how the model actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

    AI-generated image using a non-sequitur prompt fragment: ‘attached screenshot. It’s urgent that I see your project to assess’. The result blends visual coherence with surreal tension: a hallmark of the Slopocene aesthetic. AI-generated with Leonardo Phoenix 1.0, prompt fragment by author.

    You can try AI rewilding yourself with any online image generator.

    Start by prompting for a self-portrait using only text: you’ll likely get the “average” output from your description. Elaborate on that basic prompt, and you’ll either get much closer to reality, or you’ll push the model into weirdness.

    Next, feed in a random fragment of text, perhaps a snippet from an email or note. What does the output try to show? What words has it latched onto? Finally, try symbols only: punctuation, ASCII, Unicode. What does the model hallucinate into view?

    The output – weird, uncanny, perhaps surreal – can help reveal the hidden associations between text and visuals that are embedded within the models.
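    If you’d rather script these experiments than click through a web interface, here’s a rough sketch in Python. It uses the OpenAI Images API purely as a stand-in for whichever generator you prefer (Leonardo and others have their own APIs); the model name, prompts and symbols below are illustrative assumptions, not a recipe.

    # A hedged sketch of the three rewilding probes: a plain self-portrait prompt,
    # a non-sequitur text fragment, and symbols only. Assumes the OpenAI Python SDK
    # and an OPENAI_API_KEY in your environment; any image-generation API would do.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "a self-portrait: [describe yourself in plain text]",        # step 1: the "average" you
        "attached screenshot. It's urgent that I see your project",  # step 2: non-sequitur fragment
        ";; :: ~ ... $ %",                                           # step 3: symbols only (some services may refuse near-empty prompts)
    ]

    for p in prompts:
        # Generate one image per probe and print the hosted URL for comparison.
        result = client.images.generate(model="dall-e-3", prompt=p, size="1024x1024", n=1)
        print(p, "->", result.data[0].url)

    Whatever comes back, the point is the comparison: how the output shifts as the prompt drifts further from anything resembling a description.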

    Insight through misuse

    Creative AI misuse offers three concrete benefits.

    First, it reveals bias and limitations in ways normal usage masks: you can uncover what a model “sees” when it can’t rely on conventional logic.

    Second, it teaches us about AI decision-making by forcing models to show their work when they’re confused.

    Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing – and often misusing – AI to understand its statistical patterns and decision-making processes.

    These skills become more urgent as AI systems grow more sophisticated and ubiquitous. They’re being integrated into everything from search to social media to creative software.

    When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they’re entering a collaborative relationship with a system that has particular biases, capabilities and blind spots.

    Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break”.

    This isn’t about becoming more efficient AI users. It’s about maintaining agency in relationships with systems designed to be persuasive, predictive and opaque.


    This article was originally published on The Conversation on 1 July 2025. Read the article here.

  • Re-Wilding AI

    Here’s a recorded version of a workshop I first delivered at the Artificial Visionaries symposium at the University of Queensland in November 2024. I’ve since used chunks and versions of it in my teaching, and in parts of my research and practice.

  • Why can’t you just THINK?!

    Image generated by Leonardo.Ai, 20 May 2025; prompt by me.

    “Just use your imagination” / “Try thinking like a normal person”

    There is this wonderful reactionary nonsense flying around that making use of generative AI is an excuse, that it’s a cop-out, that it’s dumbing down society, that it’s killing our imaginations and the rest of what makes us human. That people need AI because they lack the ability to come up with fresh new ideas, or to make connections between them. I’ve seen this in social posts, videos, reels, and comments, not to mention Reddit threads, and in conversation with colleagues and students.

    Now — this isn’t to say that some uses of generative AI aren’t light-touch, or couldn’t just as easily be done with tools or methods that have worked fine for decades. Nor is it to say that generative AI doesn’t have its problems: misinformation/hallucination, data ethics, and environmental impacts.

    But what I would say is that for many people, myself very much included, thinking, connecting, synthesising, imagining — these aren’t the problem. What creatives, knowledge workers, and artists often struggle with — not to mention those with different brain wirings, for whom the world can be an overwhelming place just as a baseline — is:

    1. stopping or slowing the number of thoughts, ideas, imaginings, such that we can
    2. get them into some kind of order or structure, so we can figure out
    3. what anxieties, issues, and concerns are legitimate or unwarranted, and also
    4. which ideas are worth developing, to then
    5. create strategies to manage or alleviate the anxieties while also
    6. figuring out how to develop and build on the good ideas

    For some, once you reach step 6, there’s still the barrier of starting. For those OK with starting, there’s the problem of carrying on, of keeping up momentum, or of completing and delivering/publishing/sharing.

    I’ve found generative AI incredibly helpful for stepping me through one or more of these stages, for body-doubling and helping me stop and celebrate wins, suggesting or triggering moments of rest or recovery, and for helping me consolidate and keep track of progress across multiple tasks, projects, and headspaces — both professionally and personally. Generative AI isn’t necessarily a ‘generator’ for me, but rather a clarifier and companion.

    If you’ve tested or played with genAI and it’s not for you, that’s fine. That’s an informed and logical choice. But if you haven’t tested any tools at all, here’s a low-stakes invitation to do so, with three ways to see how it might help you out.

    You can try these prompts and workflows in ChatGPT, Claude, Copilot, Gemini, or another proprietary model, but note, too, that using genAI doesn’t have to mean selling your soul or your data. Try an offline host like LMStudio or GPT4All, where you can download models to run locally — I’ve added some suggested models to download and run offline under each helper. If you’re not confident about your laptop’s capacity to run them (or if, in trying them, things get real sloooooow), you can try many of these independent models via HuggingChat (HuggingFace account required for some features/saved chats).
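    If you do go the offline route and want to drive a local model from a script rather than a chat window, here’s a minimal sketch in Python. It assumes you’ve loaded a model in LMStudio and started its local server, which exposes an OpenAI-compatible API; the default address of http://localhost:1234/v1 and the model identifier below are assumptions to check against your own set-up.

    # A rough sketch, not a recipe: send one of the helper prompts to a locally hosted model
    # via LMStudio's OpenAI-compatible server. GPT4All and similar hosts offer comparable routes.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")  # local server; no real key required

    prompt = (
        "Here's a list of my tasks and appointments for today:\n"
        "[PASTE LIST]\n\n"
        "Based on this and knowing I work best in 60-minute blocks, "
        "arrange my day into loose work blocks."
    )

    response = client.chat.completions.create(
        model="mistral-7b-instruct",  # placeholder: use the identifier of whichever model you've loaded
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )

    print(response.choices[0].message.content)

    The same pattern works for any of the helpers below: swap in the relevant prompt and keep the client pointed at your local server.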

    These helpers are designed as lightweight executive/creative assistants — not hacks or cheats or shortcuts or slop generators, but rather frames or devices for everyday thinking, planning, feeling. Some effort and input is required from you to make these work: this isn’t about replacing workload, effort, thought, contextualising or imagination, but rather about removing blank-page terror and context-switching/decision fatigue.

    If these help, take (and tweak) them. If not, no harm done. Just keep in mind: not everyone begins the day with clarity, capacity, or calm — and sometimes, a glitchy little assistant is just what’s needed to tip the day in our favour.


    PS: If these do help — and even if they don’t — tell me in the comments. Did you tweak or change them? Happy to post more on developing and consolidating these helpers, such as through system prompts. (See also: an earlier post on my old Claude set-up.)



    Helper 1: Daily/Weekly Planner + Reflector

    Prompt:

    Here’s a list of my tasks and appointments for today/this week:
    [PASTE LIST]

    Based on this and knowing I work best in [e.g. mornings / 60-minute blocks / pomodoro technique / after coffee], arrange my day/s into loose work blocks [optional: between my working hours of e.g. 9:30am – 5:30pm].

    Then, at the end of the day/week, I’ll paste in what I completed. When I do that, summarise what was achieved, help plan tomorrow/next week based on unfinished tasks, and give me 2–3 reflection questions or journaling prompts.

    Follow-up (end of day/week):

    Here’s what I completed today/this week:
    [PASTE COMPLETED + UNFINISHED TASKS]

    Please summarise the day/week, help me plan tomorrow/next week, and give me some reflection/journalling prompts.

    Suggested offline models:

    • Mistral-7B Instruct (Q4_K_M GGUF) — low-medium profile model for mid-range laptops; good with planning, lists, and reflection prompts when given clear instructions
    • OpenHermes-2.5 Mistral — stronger reasoning and better output formatting; better at handling multi-step tasks and suggesting reflection angles



    Helper 2: Brain Dump Sorter

    Prompt:

    Here’s a raw brain-dump of my thoughts, ideas, frustrations, and feelings:
    [PASTE DUMP HERE — I suggest dictating into a note to avoid self-editing]

    Please:

    1. Pull out any clear ideas or recurring themes
    2. Organise them into loose categories (e.g. creative ideas, anxieties, to-dos, emotional reflections)
    3. Suggest any small actions or helpful rituals to follow up, especially if anything seems urgent, stuck, or energising.

    Suggested offline models:

    • Nous-Hermes-2 Yi 6B — a mini-model (aka small language model, or at least an LLM that’s smaller than most!) with good abilities in organisation and light sorting-through of emotions, triggers, etc. Good for extracting themes, patterns, and light structuring of chaotic input.
    • MythoMax-L2 13B — balanced emotional tone, chaos-wrangling, and action-oriented suggestions. Handles fuzzy or frazzled or fragmented brain-dumps well; has a nice, easygoing but also pragmatic and constructive persona.



    Helper 3: Creative Block / Paralysis

    Prompt:

    I’m feeling blocked/stuck. Here’s what’s going on:
    [PASTE THOUGHTS — again, dictation recommended]

    Please:

    • Respond supportively, as if you’re a gentle creative coach or thoughtful friend
    • Offer 2–3 possible reframings or reminders
    • Give me a nudge or ritual to help me shift (e.g. a tiny task, reflection, walk, freewrite, etc.)

    You don’t have to solve everything — just help me move one inch forward or step back/rest meaningfully.

    Suggested offline models:

    • TinyDolphin-2.7B (in GGUF or GPTQ format) — one of my favourite mini-models: surprisingly gentle, supportive, and adaptive if well-primed. Not big on poetry or ritual, but friendly and low-resource.
    • Neural Chat 7B (based on Qwen by Alibaba) — fine-tuned for conversation, reflection, introspection; performs well with ‘sounding board’ type prompts, good as a coach or helper, won’t assume immediate action, urgency or priority
  • Clearframe

    Detail of an image generated by Leonardo.Ai, 3 May 2025; prompt by me.

    An accidental anti-productivity productivity system

    Since 2023, I’ve been working with genAI chatbots. What began as a novelty—occasionally useful for a quick grant summary or newsletter edit—has grown into a flexible, light-touch system spanning Claude, ChatGPT, and offline models. Together, these tools are closer to a co-worker, even a kind of assistant. Along the way, I’ve learned a great deal about how these enormous proprietary models work.

    Essentially, context is key—building up a collection of prompts or use cases, simple and iterable context/knowledge documents and system instructions, and testing how far back in a chat the model can actually recall.

    With Claude, context is tightly controlled—you either have context within individual chats, or it’s contained within Projects—tailored, customised collections of chats that are ‘governed’ by umbrella system instructions and knowledge documents.

    This is a little different to ChatGPT, where context can often bleed between chats, aided and facilitated by its ‘memory’ functionality, which is a kind of blanket set of context notes.

    I have always struggled with time, focus, and task/project management and motivation—challenges later clarified by an ADHD diagnosis. Happily, though, it turns out that executive functioning is one thing that generative AI can do pretty well. Its own mechanisms are a kind of targeted looking—many ‘attention heads’ scanning the input in parallel, each checking which tokens matter for the conditions it has learned to track. And it turns out that with a bit of foundational work around projects, tasks, responsibilities, and so on, genAI can do much of the work of an executive assistant—maybe not locking in your meetings or booking travel, but with agentic AI this can’t be far off.

    You might start to notice patterns in your workflow, energy, or attention—or ask the model to help you explore them. You can map trends across weeks, months, and really start to get a sense of some of your key triggers and obstacles, and ask for suggestions for aids and supports.

    In one of these reflective moments, I went off on a tangent around productivity methods, systems overwhelm, and the lure of the pivot. I suggested lightly that some of these methods were akin to cults, with their strict doctrines and their acolytes and heretics. The LLM—used to my flights of fancy by this point and happy to riff—said this was an interesting angle, and asked if I wanted to spin it up into a blog post, academic piece, or something creative. I said creative, and that starting with a faux pitch from a culty productivity influencer would be a fun first step.

    I’d just watched The Institute, a 2013 documentary about the alternate reality game ‘The Jejune Institute’, and fed in my thoughts around the curious psychology of willing suspension of disbelief, even when narratives are based in the wider world. The LLM knew about my studio this semester—a revised version of a previous theme on old/new media, physical experiences, liveness and presence. At first it suggested a digital tool, but once I mentioned the studio, it knew I was after something analogue, something paper-based.

    We went back and forth in this way for a little while, until we settled on a ‘map’ of four quadrants. These four quadrants echoed themes from my work and interests: focus (what you’re attending to), friction (what’s in your way), drift (where your attention wants to go), and signal (what keeps breaking through).

    I found myself drawn to the simplicity of the system—somewhat irritating, given that this began with a desire to satirise these kinds of methods or approaches. But its tactile, hand-written form, as well as its lack of prescription about what to note down or how to use it, made it attractive as a frame for reflecting on… on what? Again, I didn’t want this to be set in stone, to become a drag or a burden… so again, going back and forth with the LLM, we decided it could be a daily practice, or every other day, every other month even. Maybe it could be used for a specific project. Maybe you do it as a set-up/psych-up activity, or maybe it’s more for afterwards, to look back on how things went.

    So this anti-productivity method that I spun up with a genAI chatbot has actually turned into a low-stakes, low-effort means of setting up my days, or looking back on them. Five or six weeks in, there are weeks where I draw up a map most days, and others where I might do one on a Thursday or Friday or not at all.

    Clearframe was one of the names the LLM suggested, and I liked how banal it was, how plausible for this kind of method. Once the basic model was down, the LLM generated five modules—every method needs its handbook. There’s an Automata—a set of tables and prompts to help when you don’t know where to start, and even a card deck that grows organically based on patterns, signals, ideas.

    Being a lore- and world-builder, I couldn’t help but start to layer in some light background on where the system emerged, and how glitch and serendipity are built in. But the system and its vernacular are so light-touch, so generic, that I’m sure you could tweak it to any taste or theme—art, music, gardening, sport, take your pick.

    Clearframe was, in some sense, a missing piece of my puzzle. I get help with other aspects of executive assistance through LLM interaction, or through systems of my own that pre-dated my ADHD diagnosis. What I consistently struggle to find time for, though, is reflection—some kind of synthesis or observation or wider view on things that keep cropping up or get in my way or distract me or inspire me. That’s what Clearframe allows.

    I will share the method at some stage—maybe in some kind of pay-what-you-want zine, mixed physical/digital, or RPG/ARG-type form. But for now, I’m just having fun playing around, seeing what emerges, and how it’s growing.

    Generative AI is both boon and demon—lauded in software and content production, distrusted or underused in academia and the arts. I’ve found that for me, its utility and its joy lie in presence, not precision: a low-stakes companion that riffs, reacts, and occasionally reveals something useful. Most of the time, it offers options I discard—but even that helps clarify what I do want. It doesn’t suit every project or person, for sure, but sometimes it accelerates an insight, flips a problem, or nudges you somewhere unexpected, like a personalised way to re-frame your day. AI isn’t sorcery, just maths, code, and language: in the right combo, though, these sure can feel like magic.

  • A question concerning technology

    Image by cottonbro studio on Pexels.

    There’s something I’ve been ruminating on and around of late. I’ve started drafting a post about it, but I thought I’d post an initial provocation here, to lay a foundation, to plant a seed.

    A question:

    When do we stop hiding in our offices, pointing at and whispering about generative AI tools, and start just including them in the broader category of technology? When do we sew up the hole this fun/scary new thing poked into our blanket, and accept it as part of the broader fabric of lived experience?

    I don’t necessarily mean usage here, but rather just mental models and categorisations.

    Of course, AI/ML is already part of daily life and many of the systems we engage with; and genAI has been implemented across almost every sector (legitimately or not). But most of the corporate narratives and mythologies of generative AI don’t want anyone understanding how the magic works — these mythologies actively undermine and discourage literacy and comprehension, coasting along instead on dreams and vibes.

    So: when does genAI become just one more technology? And what problems need to be solved, and what questions answered, before that happens?

    I posted this on LinkedIn to try and stir up some Hot Takes, but if you prefer the quiet of the blog (me too), drop your thoughts in the comments.

