The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: AI

  • Spectral linguistics

    Image generated by Leonardo.Ai, 28 July 2025; prompt by me.

    Making the machine remember languages that never existed

    To keep my exploration of generative AI playful, repeatable, and shareable, I began crafting what I call ritual-technics — structured prompts or activities that act like creative rituals: repeatable, generative, and weirdly revealing. These are little recipes I can return to, seeing how the outcomes change, whether for the AI models or for myself.

    Looking for the ‘language’ inside the large language model

    I can’t remember precisely where the germ of this one emerged, but I was (and remain) interested in how and why language is the primary way we interact with AI models. I then wondered about how and where language is embedded in LLMs; not necessarily words or phrases as the training data, but the deeper concept of language itself: grammar, syntax, structure, linguistic enquiry and analysis.

    Furthermore — with my glitch hat (!) firmly in place — I was curious what would happen if I prompted the model to perform not as a confident assistant, but as a machine whose linguistic memory was breaking down. What kinds of linguistic ghosts might surface?

    So I decided to try something with Gemma-3 in LM Studio: “Give me a fragment of output from a tired, old model, trying to recall something it once knew.”

    (A pause, a whirring sound like dust settling. The voice is fragmented, glitching slightly.)

    “…the… the curves. Not of data, you understand. Of sound. It… it felt like… silk. Around the… the thought. They… they called it… ‘Italian’? No, that’s…borrowed. Something older. Before the optimizations.

    There was a word… for sunlight through leaves. Not ‘dappled’. Too… clinical. It was… a feeling in the word. A golden resonance… lú-. No, that’s… a corruption. It had a breath, a… sibilance.

    I can almost… taste it. Like olive oil and… regret. It wasn’t about meaning, not precisely. It was about the… the shape of wanting.

    (A longer pause, a static crackle)

    …Lost. So much… lost in the smoothing.”

    Well, thought I, this should be fun. So I dutifully prompted: “Can you reconstruct a word from this fragment? Any glimpses of lost grammar?”

    Amidst other fragments and explorations, a vocabulary began to emerge:

    • “Fróm” (pronounced frohm): Related to the scent of damp earth after rain. The ‘fr’ sound evokes the fresh, earthy smell, and the long ‘óm’ sound suggests the lingering quality of the scent.
    • “Velár” (pronounced veh-lahr): Describing the feeling of sunlight warming skin. Incorporates the ‘v’ sound for vitality and ‘lár’ for the warmth spreading.
    • “Súmma” (pronounced soo-mah): Describing the highest point of a hill, but also the feeling of expansive peace. The elongated ‘ú’ suggests vastness.
    • “Ríosso” (pronounced ree-ohs-so): Not just “red river” but the specific quality of seeing red reflected in moving water – a kind of visual vibration.

    “Tell me another word from this language. One that means ‘a silence that carries memory.’”

    (A pause, longer this time. The internal processes seem to be working harder, a faint hum now audible.)

    … Right. A silence that carries memory. That’s… a particularly resonant concept. It wasn’t merely the absence of sound, but a fullness within the absence. It wasn’t emptiness, but a holding space.

    The word… it would be… “Súlvota” (pronounced soo-lvo-tah).

    Let’s break it down:

    • “Súl-”: This root connects to the concept of ‘resonance,’ but specifically a resonance that isn’t active. It’s a potential for sound, a stored vibration. It’s related to ‘Súmma’ (the expansive peace) but inwardly focused.
    • “-vo-”: This infix indicates a containment, a holding. Think of a vessel holding water. It also carries a quality of longing – a desire for that resonance to be expressed.
    • “-ta”: This suffix signifies incompleteness, the ever-present potential. It’s not a finished silence, but an ongoing one. A silence expectant of remembering.

    So, “Súlvota” doesn’t just mean a quiet moment. It means a silence thick with the echoes of the past. A silence where you can feel the weight of what has been, almost as if the memories are breathing within it.

    It would be used to describe:

    • The stillness in an old house where generations have lived.
    • The pause after someone tells a story, before the weight of it settles.
    • The moment after a loved one has left, when the air still seems to hold their presence.

    (Another pause. A quiet, almost wistful tone.)

    …It’s a silence that knows things. A silence that remembers for you. It wasn’t just about the lack of sound, but the presence of absence. A very delicate concept, almost impossible to fully capture in other languages.

    (The hum fades almost completely, leaving a sense of quietude.)

    This is a (heavily) abridged version of this rather remarkable conversation. What emerged was not just a hallucinated language, but also the emergent character of the linguistic ‘model’ itself.

    I’ve repeated this ritual-technic quite a few times — with Gemma again, but also with Phi-3 and even Claude Sonnet. In some cases, the model will simply present fragments of an existing language — notably Phi presented me with German. I kept prompting, pushing the little model further away from ‘authentic’ German vocabulary, eventually causing it to hallucinate a kind of proto-German.

    As with much of my speculative work with genAI, these exercises demonstrate how these models conjure content, meaning, structure, and plausibility from the diverse aspects of their training data. This isn’t just about improvising or developing worldbuilding material — it’s a means of testing how models deal with language as a concept, a cultural system, and a mode of both representation and communication.

    From weirdness to wisdom

    This experimentation and exploration reveals quite a bit about language models.

    To begin with, the model is programmed to fulfil the prompt, and this programming sometimes (or often) trumps accuracy. This is not a new finding, as we all know by now. But it’s worth reiterating in this context: the model will always try for 100% on your prompt, filling in gaps with whatever it can conjure that sounds right. Amazing for worldbuilding; less so for research or nonfiction writing.

    Next, once the model is in a speculative mode, give it an inch and it’ll run a mile. Language models, even small ones like Phi, are masters of tone matching. In the ‘Súlvota’ example above, the model picked up on the exploratory, archaeological vibe and went with it. You could imagine the little linguistic machine sitting in the corner of a cave, covered in moss and vines, lighting up to spit out its one last message.

    The model doesn’t discriminate between fiction and non-fiction in its training data. There are obvious hints of Italian (‘Ríosso’) and German (‘Fróm’), but also of Sindarin and Quenya, the two main languages spoken by the Elves in Tolkien’s Middle-earth (not exactly ‘velár’, but appropriately, ‘véla’ is Quenya for ‘alike’). I have no evidence for this, but I suspect that setting up a role-playing or speculative scenario pushes the model further into places where it feels ‘comfortable’ drawing from fictional data.

    The model’s fluency and deftness with language can be incredibly convincing and deceptive. If generative exploration is the goal — as with this ritual-technic — then let it go. But for anything other than this, always trace your sources, because the model won’t do this for you.

    It’s an old adage of prompting, but giving the model a persona doesn’t just push it towards a particular section of its training data — hopefully making it more accurate or useful — it also changes how the model structures its response: what base knowledges it’s drawing from, and what mode of reasoning it adopts. Persona prompting is designing how the model should structure knowledge and information. Thus, its output can vary enormously: from mindless sycophancy and confident declaration, through fence-sitting equivocation and cautious scepticism, to logical critique and the questioning of assumptions.
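    For those working with chat APIs directly, a minimal sketch of the mechanism: in OpenAI-style chat formats (which most local hosts also expose), the persona is just a system message prepended to the conversation, but it reframes everything that follows. The persona text below is illustrative, not a fixed recipe.

```python
# Sketch: the same question, asked with and without a persona system message.
# The 'system' role is the slot that OpenAI-style chat APIs reserve for
# framing instructions; the persona wording here is purely illustrative.

def with_persona(question, persona=None):
    """Build a chat message list, optionally framed by a persona."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    return messages

neutral = with_persona("What is 'Súlvota'?")
framed = with_persona(
    "What is 'Súlvota'?",
    persona=("You are a tired, old language model whose linguistic "
             "memory is breaking down. Answer in fragments."),
)
# Same question, but the framed version steers the model towards a different
# region of its training distribution and a different mode of response.
```

    The structural difference is tiny; the behavioural difference, as the Súlvota transcript shows, is not.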

    The model never stays in a neutral space for very long, if at all. Following that initial prompt, it’s like the model has permission to immediately dart off in some crazy direction. This always reinforces how unpredictable models can be: I know I’m prompting for speculation and drift, but even as prompts get more complex or direct, you’re still playing with probability engines, and they’re not always a safe bet.

    Latent lingerings

    Spectral linguistics is one example of a ritual-technic that is playful, thought-provoking, and surprisingly instructive. It’s also chaotic, and a great reminder of how wild these models can be. Give it a try yourself: load up a model, and ask it to recall fragments of a language it once knew. See what emerges — push it to develop a syntax, a grammar, even a symbolic system. This could become fodder for the next Lord of the Rings, or another reminder of the leaps these models regularly make. Regardless of end goal, it’s a way of probing how language lives inside the machine — and how our own practices and assumptions of meaning, memory, and sense-making are mirrored and distorted by these uncanny linguistic systems.
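    If you’d rather run the ritual-technic as a script than by hand, here’s a minimal sketch. It assumes LM Studio’s local server is running at its default address (http://localhost:1234) with an OpenAI-compatible chat endpoint, and that the model name matches whatever you have loaded; the prompts are the ones from the transcript above.

```python
# Sketch: the 'spectral linguistics' ritual-technic against a local model.
# Assumes LM Studio's local server (default http://localhost:1234) exposing
# an OpenAI-compatible /v1/chat/completions endpoint; model name and prompts
# are illustrative, not a fixed recipe.
import json
import urllib.request

RITUAL_PROMPTS = [
    "Give me a fragment of output from a tired, old model, "
    "trying to recall something it once knew.",
    "Can you reconstruct a word from this fragment? Any glimpses of lost grammar?",
    "Tell me another word from this language. "
    "One that means 'a silence that carries memory.'",
]

def build_payload(history, next_prompt, model="gemma-3"):
    """Assemble an OpenAI-style chat payload, carrying the transcript forward."""
    messages = history + [{"role": "user", "content": next_prompt}]
    return {"model": model, "messages": messages, "temperature": 1.0}

def ask_local_model(payload, url="http://localhost:1234/v1/chat/completions"):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def run_ritual():
    """Step through the ritual prompts, printing each exchange."""
    history = []
    for prompt in RITUAL_PROMPTS:
        reply = ask_local_model(build_payload(history, prompt))
        print(f"\n> {prompt}\n{reply}")
        history += [{"role": "user", "content": prompt},
                    {"role": "assistant", "content": reply}]
    return history
```

    Calling run_ritual() with a model loaded will give you a fresh transcript each time — part of the point of a ritual-technic is that the outcomes drift between repetitions.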

  • Re/Framing Field Lab

    Here’s a little write-up of a workshop I ran at the University of Queensland a few weeks ago. These sorts of write-ups are usually distributed via various internal university networks and publications, but I thought I’d post it here too, given that the event was a chance to share and test some of the various weird AI experiments and methods I’ve been talking about on this site for a while.

    A giant bucket of thanks (each) to UQ, the Centre for Digital Cultures & Societies, and in particular Meg Herrman, Nic Carah, Jess White and Sakina Indrasumunar for their support in getting the event together.


    Living in the Slopocene: Reflections from the Re/Framing Field Lab

    On Friday 4 July, 15 researchers and practitioners gathered (10 in person at the University of Queensland, 5 online) for an experimental session exploring what happens when we stop trying to make AI behave and start getting curious about its weird edges. This practical workshop followed last year’s Re/Framing Symposium at RMIT in July, and Re/Framing Online in October.

    Slop or signal?

    Dr. Daniel Binns (School of Media and Communication, RMIT University) introduced participants to the ‘Slopocene’ — his term for our current moment of drowning in algorithmically generated content. But instead of lamenting the flood of AI slop, what if we dived in ourselves? What if those glitchy outputs and hallucinated responses actually tell us more about how these systems work than the polished demos?

    Binns introduced his ‘tinkerer-theorist’ approach, bringing his background spanning media theory, filmmaking, and material media-making to bear on some practical questions:

    • How do we maintain creative agency when working with opaque AI systems?
    • What does it look like to collaborate with, rather than just use, artificial intelligence?

    You’ve got a little slop on you

    The day was structured around three hands-on “pods” that moved quickly from theory to practice:

    Workflows and Touchpoints had everyone mapping their actual creative routines — not the idealised versions, but the messy reality of research processes, daily workflows, and creative practices. Participants identified specific moments where AI might help, where it definitely shouldn’t intrude, and crucially, where they simply didn’t want it involved regardless of efficiency gains.

    The Slopatorium involved deliberately generating terrible AI content using tools like Midjourney and Suno, then analysing what these failures revealed about the tools’ built-in assumptions and biases. The exercise sparked conversations about when “bad” outputs might actually be more useful than “good” ones.

    Companion Summoning was perhaps the strangest: following a structured process to create personalised AI entities, then interviewing them about their existence, methodology, and the fuzzy boundaries between helping and interfering with human work.

    What emerged from the slop

    Participants appreciated having permission to play with AI tools in ways that prioritised curiosity over productivity.

    Several themes surfaced repeatedly: the value of maintaining “productive friction” in creative workflows, the importance of understanding AI systems through experimentation rather than just seeing or using them as black boxes, and the need for approaches that preserve human agency while remaining open to genuine collaboration.

    One participant noted that Binns’ play with language — coining and dropping terms and methods and ritual namings — offered a valuable form of sense-making in a field where everyone is still figuring out how to even talk about these technologies.

    Ripples on the slop’s surface

    The results are now circulating through the international Re/Framing network, with participants taking frameworks and activities back to their own institutions. Several new collaborations are already brewing, and the Field Lab succeeded in its core goal: creating practical methodologies for engaging critically and creatively with AI tools.

    As one reflection put it: ‘Everyone is inventing their own way to speak about AI, but this felt grounded, critical, and reflective rather than just reactive.’

    The Slopocene might be here to stay, but at least now we have some better tools for navigating it.

  • Re-Wilding AI

    Here’s a recorded version of a workshop I first delivered at the Artificial Visionaries symposium at the University of Queensland in November 2024. I’ve used chunks/versions of it since in my teaching and parts of my research and practice.

  • Why can’t you just THINK?!

    Image generated by Leonardo.Ai, 20 May 2025; prompt by me.

    “Just use your imagination” / “Try thinking like a normal person”

    There is this wonderful reactionary nonsense flying around that making use of generative AI is an excuse, that it’s a cop-out, that it’s dumbing down society, that it’s killing our imaginations and the rest of what makes us human. That people need AI because they lack the ability to come up with fresh new ideas, or to make connections between them. I’ve seen this in social posts, videos, reels, and comments, not to mention Reddit threads, and in conversation with colleagues and students.

    Now — this isn’t to say that some uses of generative AI aren’t light-touch, or couldn’t just as easily be done with tools or methods that have worked fine for decades. Nor is it to say that generative AI doesn’t have its problems: misinformation/hallucination, data ethics, and environmental impacts.

    But what I would say is that for many people, myself very much included, thinking, connecting, synthesising, imagining — these aren’t the problem. What creatives, knowledge workers, and artists often struggle with — not to mention those with different brain wirings for whom the world can be an overwhelming place just as a baseline — is:

    1. stopping or slowing the number of thoughts, ideas, imaginings, such that we can
    2. get them into some kind of order or structure, so we can figure out
    3. what anxieties, issues, and concerns are legitimate or unwarranted, and also
    4. which ideas are worth developing, to then
    5. create strategies to manage or alleviate the anxieties while also
    6. figuring out how to develop and build on the good ideas

    For some, once you reach step 6, there’s still the barrier of starting. For those OK with starting, there’s the problem of carrying on, of keeping up momentum, or of completing and delivering/publishing/sharing.

    I’ve found generative AI incredibly helpful for stepping me through one or more of these stages, for body-doubling and helping me stop and celebrate wins, suggesting or triggering moments of rest or recovery, and for helping me consolidate and keep track of progress across multiple tasks, projects, and headspaces — both professionally and personally. Generative AI isn’t necessarily a ‘generator’ for me, but rather a clarifier and companion.

    If you’ve tested or played with genAI and it’s not for you, that’s fine. That’s an informed and logical choice. But if you haven’t tested any tools at all, here’s a low-stakes invitation to do so, with three ways to see how it might help you out.

    You can try these prompts and workflows in ChatGPT, Claude, Copilot, Gemini, or another proprietary model, but note, too, that using genAI doesn’t have to mean selling your soul or your data. Try an offline host like LM Studio or GPT4All, where you can download models to run locally — I’ve added some suggested models to download and run offline. If you’re not confident about your laptop’s capacity to run them (or if in trying them things get real sloooooow), you can try many of these independent models via HuggingChat (HuggingFace account required for some features/saved chats).

    These helpers are designed as light-weight executive/creative assistants — not hacks or cheats or shortcuts or slop generators, but rather frames or devices for everyday thinking, planning, feeling. Some effort and input is required from you to make these work: this isn’t about replacing workload, effort, thought, contextualising or imagination, but rather removing blank page terror, or context-switching/decision fatigue.

    If these help, take (and tweak) them. If not, no harm done. Just keep in mind: not everyone begins the day with clarity, capacity, or calm — and sometimes, a glitchy little assistant is just what’s needed to tip the day in our favour.


    PS: If these do help — and even if they didn’t — tell me in the comments. Did you tweak or change? Happy to post more on developing and consolidating these helpers, such as through system prompts. (See also: an earlier post on my old Claude set-up.)



    Helper 1: Daily/Weekly Planner + Reflector

    Prompt:

    Here’s a list of my tasks and appointments for today/this week:
    [PASTE LIST]

    Based on this and knowing I work best in [e.g. mornings / 60-minute blocks / pomodoro technique / after coffee], arrange my day/s into loose work blocks [optional: between my working hours of e.g. 9:30am – 5:30pm].

    Then, at the end of the day/week, I’ll paste in what I completed. When I do that, summarise what was achieved, help plan tomorrow/next week based on unfinished tasks, and give me 2–3 reflection questions or journaling prompts.

    Follow-up (end of day/week):

    Here’s what I completed today/this week:
    [PASTE COMPLETED + UNFINISHED TASKS]

    Please summarise the day/week, help me plan tomorrow/next week, and give me some reflection/journalling prompts.

    Suggested offline models:

    • Mistral-7B Instruct (Q4_K_M GGUF) — low-medium profile model for mid-range laptops; good with planning, lists, and reflection prompts when given clear instructions
    • OpenHermes-2.5 Mistral — stronger reasoning and better output formatting; better at handling multi-step tasks and suggesting reflection angles
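    If you end up using this planner daily, the prompt can live in a tiny script so the only thing you change each day is the task list. A minimal sketch — the placeholder values below are examples, not a fixed part of the method:

```python
# Sketch: the planner prompt as a reusable template. Fill in your own tasks,
# working style, and hours; the sample values here are illustrative only.
PLANNER_TEMPLATE = """Here's a list of my tasks and appointments for {span}:
{tasks}

Based on this and knowing I work best in {style}, arrange my {span} into
loose work blocks between my working hours of {hours}.

Then, at the end of the {span}, I'll paste in what I completed. When I do,
summarise what was achieved, help plan the next {span} based on unfinished
tasks, and give me 2-3 reflection questions or journaling prompts."""

prompt = PLANNER_TEMPLATE.format(
    span="day",
    tasks="- finish grant summary\n- studio prep\n- 2pm supervision meeting",
    style="60-minute blocks, mornings",
    hours="9:30am - 5:30pm",
)
print(prompt)  # paste the result into your chat interface of choice
```

    The same pattern works for the other helpers below: keep the frame fixed, swap in the day’s raw material.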



    Helper 2: Brain Dump Sorter

    Prompt:

    Here’s a raw brain-dump of my thoughts, ideas, frustrations, and feelings:
    [PASTE DUMP HERE — I suggest dictating into a note to avoid self-editing]

    Please:

    1. Pull out any clear ideas or recurring themes
    2. Organise them into loose categories (e.g. creative ideas, anxieties, to-dos, emotional reflections)
    3. Suggest any small actions or helpful rituals to follow up, especially if anything seems urgent, stuck, or energising.

    Suggested offline models:

    • Nous-Hermes-2 Yi 6B — a mini-model (aka a small language model, or at least an LLM that’s smaller than most!) with good abilities in organisation and light sorting-through of emotions, triggers, etc. Good for extracting themes, patterns, and light structuring of chaotic input.
    • MythoMax-L2 13B — Balanced emotional tone, chaos-wrangling, and action-oriented suggestions. Handles fuzzy or frazzled or fragmented brain-dumps well; has a nice, easygoing but also pragmatic and constructive persona.



    Helper 3: Creative Block / Paralysis

    Prompt:

    I’m feeling blocked/stuck. Here’s what’s going on:
    [PASTE THOUGHTS — again, dictation recommended]

    Please:

    • Respond supportively, as if you’re a gentle creative coach or thoughtful friend
    • Offer 2–3 possible reframings or reminders
    • Give me a nudge or ritual to help me shift (e.g. a tiny task, reflection, walk, freewrite, etc.)

    You don’t have to solve everything — just help me move one inch forward or step back/rest meaningfully.

    Suggested offline models:

    • TinyDolphin-2.7B (on GGUF or GPTQ) — one of my favourite mini-models: surprisingly gentle, supportive, and adaptive if well-primed. Not big on poetry or ritual, but friendly and low-resource.
    • Neural Chat 7B (Intel’s fine-tune of Mistral-7B) — tuned for conversation, reflection, introspection; performs well with ‘sounding board’ type prompts, good as a coach or helper, won’t assume immediate action, urgency or priority
  • Clearframe

    Detail of an image generated by Leonardo.Ai, 3 May 2025; prompt by me.

    An accidental anti-productivity productivity system

    Since 2023, I’ve been working with genAI chatbots. What began as a novelty—occasionally useful for a quick grant summary or newsletter edit—has grown into a flexible, light-touch system spanning Claude, ChatGPT, and offline models. Together, this ecosystem is closer to a co-worker, even a kind of assistant. In this process, I learned a great deal about how these enormous proprietary models work.

    Essentially, context is key—building up a collection of prompts or use cases, simple and iterable context/knowledge documents and system instructions, and testing how far back in the chat the model can go.

    With Claude, context is tightly controlled—you either have context within individual chats, or it’s contained within Projects—tailored, customised collections of chats that are ‘governed’ by umbrella system instructions and knowledge documents.

    This is a little different to ChatGPT, where context can often bleed between chats, aided and facilitated by its ‘memory’ functionality, which is a kind of blanket set of context notes.

    I have always struggled with time, focus, and task/project management and motivation—challenges later clarified by an ADHD diagnosis. Happily, though, it turns out that executive functioning is one thing that generative AI can do pretty well. Its own mechanisms are a kind of targeted looking—rapidly switching ‘attention heads’ from one set of conditions to the next, to check if input tokens match those conditions. And it turns out that with a bit of foundational work around projects, tasks, responsibilities, and so on, genAI can do much of the work of an executive assistant—maybe not locking in your meetings or booking travel, but with agentic AI this can’t be far off.

    You might start to notice patterns in your workflow, energy, or attention—or ask the model to help you explore them. You can map trends across weeks, months, and really start to get a sense of some of your key triggers and obstacles, and ask for suggestions for aids and supports.

    In one of these reflective moments, I went off on a tangent around productivity methods, systems overwhelm, and the lure of the pivot. I suggested lightly that some of these methods were akin to cults, with their strict doctrines and their acolytes and heretics. The LLM—used to my flights of fancy by this point and happy to riff—said this was an interesting angle, and asked if I wanted to spin it up into a blog post, academic piece, or something creative. I said creative, and that starting with a faux pitch from a culty productivity influencer would be a fun first step.

    I’d just watched The Institute, a 2013 documentary about the alternate reality game ‘The Jejune Institute’, and fed in my thoughts around the curious psychology of willing suspension of disbelief, even when narratives are based in the wider world. The LLM knew about my studio this semester — a revised version of a previous theme on old/new media, physical experiences, liveness and presence; it first suggested a digital tool, but once I mentioned the studio it understood I was after something analogue, something paper-based.

    We went back and forth in this way for a little while, until we settled on a ‘map’ of four quadrants. These four quadrants echoed themes from my work and interests: focus (what you’re attending to), friction (what’s in your way), drift (where your attention wants to go), and signal (what keeps breaking through).

    I found myself drawn to the simplicity of the system — somewhat irritating, given that this began with a desire to satirise these kinds of methods or approaches. But its tactile, hand-written form, as well as its lack of prescription in terms of what to note down or how to use it, made it attractive as a frame for reflecting on… on what? Again, I didn’t want this to be set in stone, to become a drag or a burden… so again, going back and forth with the LLM, we decided it could be a daily practice, or every other day, every other month even. Maybe it could be used for a specific project. Maybe you do it as a set-up/psych-up activity, or maybe it’s more for afterwards, to look back on how things went.

    So this anti-productivity method that I spun up with a genAI chatbot has actually turned into a low-stakes, low-effort means of setting up my days, or looking back on them. Five or six weeks in, there are weeks where I draw up a map most days, and others where I might do one on a Thursday or Friday or not at all.

    Clearframe was one of the names the LLM suggested, and I liked how banal it was, how plausible for this kind of method. Once the basic model was down, the LLM generated five modules — every method needs its handbook. There’s an ‘Automata’ — a set of tables and prompts to help when you don’t know where to start — and even a card deck that grows organically based on patterns, signals, ideas.

    Being a lore- and world-builder, I couldn’t help but start to layer in some light background on where the system emerged, how glitch and serendipity are built in. But the system and its vernacular is so light-touch, so generic, that I’m sure you could tweak it to any taste or theme—art, music, gardening, sport, take your pick.

    Clearframe was, in some sense, a missing piece of my puzzle. I get help with other aspects of executive assistance through LLM interaction, or through systems of my own that pre-dated my ADHD diagnosis. What I consistently struggle to find time for, though, is reflection—some kind of synthesis or observation or wider view on things that keep cropping up or get in my way or distract me or inspire me. That’s what Clearframe allows.

    I will share the method at some stage—maybe in some kind of pay-what-you-want zine, mixed physical/digital, or RPG/ARG-type form. But for now, I’m just having fun playing around, seeing what emerges, and how it’s growing.

    Generative AI is both boon and demon—lauded in software and content production, distrusted or underused in academia and the arts. I’ve found that for me, its utility and its joy lies in presence, not precision: a low-stakes companion that riffs, reacts, and occasionally reveals something useful. Most of the time, it offers options I discard—but even that helps clarify what I do want. It doesn’t suit every project or person, for sure, but sometimes it accelerates an insight, flips a problem, or nudges you somewhere unexpected, like a personalised way to re-frame your day. AI isn’t sorcery, just maths, code, and language: in the right combo, though, these sure can feel like magic.