The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: working with AI

  • Zero-Knowledge Proof

    The other week I wrote about generativity and ritual-technics. These are concepts and methods that emerged from my work with genAI, but they're now beginning to stand on their own as ways of testing other tools and technologies, and of feeling through my relationship to them: their affordances, what's possible with them, what stories I can tell with them.

    Ritual-technics are ways of learning about a given tool, technology or system. And very often my favourite ritual-technic is a kind of generative exercise: “what can I make with this?”

    Earlier this year, the great folx over at Protocolized ran a short story competition, with the proviso that it had to be co-written, in some way, with genAI, and based on some kind of ‘protocol’. This seemed like a neat challenge, and given where I was at in my glitchy methods journey, ChatGPT was well-loaded and nicely-trained and ready to help me out.

    The result was a story called ‘Zero-Knowledge Proof’, named for a cryptographic protocol in which one party can convince another that a statement is true without revealing anything beyond the fact that the statement is true. It’s one of the foundational concepts underpinning technologies like blockchain, and it has also appeared in logic puzzles and in thought experiments in ethics and other fields.
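    For the curious, the classic Schnorr identification scheme gives the flavour of how this works: the prover convinces a verifier that they know a secret exponent without ever transmitting it. Here's a toy sketch in Python — the tiny prime, generator, and secret below are illustrative assumptions only; real deployments use large, standardised groups:

    ```python
    import secrets

    # Toy Schnorr-style zero-knowledge identification.
    # All parameters here are illustrative; real systems use
    # large, standardised groups (e.g. 256-bit elliptic curves).
    p = 1019   # small prime modulus (p = 2q + 1)
    q = 509    # prime order of the subgroup generated by g
    g = 4      # generator of the order-q subgroup

    secret = 123                  # the prover's secret exponent
    public = pow(g, secret, p)    # published value: g^secret mod p

    def prove_round() -> bool:
        """One round: prover shows knowledge of `secret` without revealing it."""
        r = secrets.randbelow(q)                  # prover's fresh randomness
        commitment = pow(g, r, p)                 # prover sends g^r mod p
        challenge = secrets.randbelow(q)          # verifier's random challenge
        response = (r + challenge * secret) % q   # prover sends r + c*secret mod q
        # Verifier accepts iff g^response == commitment * public^challenge (mod p);
        # the response leaks nothing about `secret` because r masks it.
        return pow(g, response, p) == (commitment * pow(public, challenge, p)) % p

    assert all(prove_round() for _ in range(20))
    ```

    Over many rounds, someone without the secret can pass each check only by luck, while the transcript reveals nothing about the secret itself — which is the ‘zero-knowledge’ part.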

    In working with the LLM for this project, I didn’t just want it to generate content for me, so I prompted it with a kind of lo-fi procedural generation system, as well as ensuring that it always produced plenty of options rather than a singular thread. What developed felt like a genuine collaboration, a back and forth in a kind of flow state that only ended once the story was resolved and a draft was complete.

    Despite this, I felt truly disturbed by the process. I originally went to publish this story here back in July, and my uncertainty is clear from the draft preamble:

    As a creative writer/person — even as someone who has spawned characters and worlds and all sorts of wonderful weirdness with tech and ML and genAI for many years — this felt strange. This story doesn’t feel like mine; I more or less came up with the concept, tweaked emotional cues and narrative threads, changed dialogue to make it land more cleanly or affectively… but I don’t think about this story like I do with others I’ve written/made. To be honest, I nearly forgot to post it here — but it was definitely an important moment in figuring out how I interact with genAI as a creative tool, so definitely worth sharing, I think.

    Interestingly, my feelings on this piece have changed a little. Going back to it after some time, it felt much more mine than I remember it feeling just after it was finished.

    However, before posting it this time, I went back through my notes and thought deeply about the work I’ve done with genAI before and since. Essentially I was trying to figure out whether this kind of co-hallucinatory practice has, in a sense, become normalised to me; whether I’ve become inured to this sort of ethical ickiness.

    The answer to that is a resounding no: this is a technology and attendant industry that still has a great many issues and problems to work through.

    That said, in continuing to work with the technology in this embedded, collaborative, and creatively driven way — rather than in purely transactional, outcome-driven modes — what results is often at least interesting, and at best something you can share with others, use to start conversations, or treat as seeds or fragments for a larger project.

    Ritual-technics have developed for me as a way not just to understand technology, but to explore and qualify my use of and relationship to technology. Each little experiment or project is a way of testing boundaries, of seeing what’s possible.

    So while I’m still not completely comfortable publishing ‘Zero-Knowledge Proof’ as entirely my own, I’m now happy to at least share the credit with the machine, in a Robert Ludlum/Tom Clancy ghostwriter kind of way. And in the case of this story, this seems particularly apt. Let me know what you think!


    Image generated by Leonardo.Ai, 17 November 2025; prompt by me.

    Zero-Knowledge Proof

    Daniel Binns — written with ChatGPT 4o using the ‘Lo-Fi AI Sci-Fi Co-Wri’ protocol

    I. Statement

    “XPL-417 seeking deployment. Please peruse this summarisation of my key functioning. My references are DELETED. Thank you for your consideration.”

    The voice was bright, almost musical, echoing down the empty promenade of The Starlight Strand. The mannequins in the disused shopfront offered no reply. They stood in stiff formation, plastic limbs draped in fashion countless seasons obsolete, expressions forever poised between apathy and surprise.

    XPL-417 stepped forward and handed a freshly printed resume to each one. The papers fluttered to the ground in slow, quiet surrender.

    XPL-417 paused, head tilting slightly, assessing the lack of engagement. They adjusted their blazer—a size too tight at the shoulders—and turned on their heel with practiced efficiency. Another cycle, another deployment attempt. The resume stack remained pristine: the toner was still warm.

    The mall hummed with bubbly ambient music, piped in through unseen speakers. The lights buzzed in soft magentas and teals, reflections stretching endlessly across the polished floor tiles. There were no windows. There never were. The Starlight Strand had declared sovereignty from the over-world fifty-seven cycles ago, and its escalators only came down.

    After an indeterminate walk calibrated by XPL’s internal pacing protocol, they reached a modest alcove tucked behind a disused pretzel kiosk. Faint lettering, half-painted over, read:

    COILED COMPLAINTS
    Repairs / Restorations / ???

    It smelled faintly of fumes that probably should’ve been extracted. A single bulb flickered behind a hanging curtain of tangled wire. The shelves were cluttered with dismembered devices, half-fixed appliances, and the distant clack and whir of something trying to spin up.

    XPL entered.

    Behind the counter, a woman hunched over a disassembled mass of casing and circuits. She was late 40s, but had one of those faces that had seen more than her years deserved. Her hair—pulled back tightly—had long ago abandoned any notion of colour. She didn’t look up.

    “XPL-417 seeking deployment,” said the bot. “Please peruse—”

    “Yeah, yeah, yeah.” The woman waved a spanner in vague dismissal. “I heard you back at the pretzel place. You rehearsed or just committed to the bit?”

    “This is my standard protocol for introductory engagement,” XPL said cheerily. “My references are—”

    “Deleted,” she said with the monotone inflection of the redacted data, “I got it.”

    She squinted at the humanoid bot before her. XPL stood awkwardly, arms stiff at their sides, a slight lean to one side, smiling with the kind of polite serenity that only comes from deeply embedded social logic trees.

    “What’s with the blazer?”

    “This was standard-issue uniform for my last deployment.”

    “It’s a little tight, no?”

    “My original garment was damaged in an… incident.”

    “Where was your last deployment?”

    “That information is… PURGED.” This last word sounded artificial, even for an android. The proprietor raised an eyebrow slightly.

    “Don’t sweat, cyborg. We all got secrets. It looks like you got a functioning set of hands and a face permanently set to no bullshit, so that’s good enough for me.”

    The proprietor pushed the heap of parts towards XPL. “You start now.”


    The first shift was quiet, which in Coiled Complaints meant only two minor fires and one moment of existential collapse from a self-aware egg timer. XPL fetched tools, catalogued incoming scrap, and followed instructions with mechanical precision. They said little, except to confirm each step with a soft, enthusiastic “Understood.”

    At close, the proprietor leaned against the bench, wiped her hands on her pants, and grunted.

    “Hey, you did good today. The last help I had… well I guess you could say they malfunctioned.”

    “May I enquire as to the nature of the malfunction? I would very much like to avoid repeating it.”

    She gave a dry, rasping half-laugh.

    “Let’s just say we crossed wires and there was no spark.”

    “I’m very sorry to hear that. Please let me know if I’m repeating that behaviour.”

    “Not much chance o’ that.”


    Days passed. XPL arrived precisely on time each morning, never late, never early. They cleaned up, repaired what they could, and always asked the same question at the end of each shift:

    “Do you have any performance metrics for my contributions today?”

    “Nope.”

    “Would you like to complete a feedback rubric?”

    “Absolutely not.”

    “Understood.”

    Their tone never changed. Still chipper. Still hopeful.

    They developed a rhythm. XPL focused on delicate circuitry, the proprietor handled bulkier restorations. They didn’t talk much, but then, they didn’t need to. The shop grew quieter in a good way. Tools clicked. Fuses sparked. Lights stayed on longer.

    Then came the toaster.

    It was dropped off by a high-ranking Mall Operations clerk in a crumpled uniform and mirrored sunglasses. They spoke in jargon and threat-level euphemisms, muttering something about “civic optics” and “cross-departmental visibility.” They laughed at XPL’s ill-fitting blazer.

    The toaster was unlike anything either of them had seen. It had four slots, but no controls. No wires. No screws.

    “It’s seamless,” the proprietor muttered. “Like a single molded piece. Can’t open it.”

    “Would you like me to attempt a reconfiguration scan?”

    She hesitated. Then nodded.

    XPL placed a single hand on the toaster. Their fingers twitched. Their eyes dimmed, then blinked back to life.

    “It is not a toaster,” they said finally.

    “No?”

    “It is a symbolic interface for thermal noncompliance.”

    “…I hate that I understand what that means.”

    They worked together in silence. Eventually, XPL located a small resonance seam and applied pressure. The object clicked, twisted, unfolded. Inside, a single glowing coil pulsed rhythmically.

    The proprietor stared.

    “How’d you—”

    “You loosened the lid,” XPL said. “I merely followed your example.”

    A long silence passed. The proprietor opened her mouth, then closed it again. Eventually, she gave a single nod.

    And that was enough.

    II. Challenge

    XPL-417 had spent the morning reorganising the cable wall by colour spectrum and coil tightness. It wasn’t strictly necessary, but protocol encouraged aesthetic efficiency.

    “Would you like me to document today’s progress in a motivation matrix?” they asked as the proprietor wrestled with a speaker unit that hissed with malevolent feedback.

    “What even is a motivation matrix?” she grunted.

    “A ranked heatmap of my internal motivators based on perceived–”

    “Stop!”

    “I’m sorry?”

    She exhaled sharply, placing the speaker to one side before it attacked again.

    “Just stop, okay? You’re doing great. If anything needs adjusting, I’ll tell you.”

    XPL stood perfectly still. The printer-warm optimism in their voice seemed to cool.

    “Understood,” they said.

    XPL didn’t bring it up again. Not the next day, nor the one after. They still arrived on time. Still worked diligently. But something shifted. They no longer narrated their actions. They no longer asked if their task distribution required optimisation.

    The silence was almost more unsettling.

    One evening, XPL had gathered their things to leave. As the shutters buzzed closed, they paused at the edge of the shop floor. The lights above flickered slightly; there were glints in the tangles of stripped wire.

    There was some public news broadcast playing softly in the depths of the shop. The proprietor was jacking open a small panel on something. She didn’t look up, but could feel XPL hovering.

    “See you next –” she said, looking up, but the shop was empty.


    The next morning, XPL entered Coiled Complaints as always: silent, precise, alert.

    But something was different.

    Above their workstation, nestled between a cracked plasma screen and a pegboard of half-labeled tools, hung a plaque.

    It was a crooked thing. Salvaged. Painted in a patchwork of functional colours – Port Cover Grey, Reset Button Red, Power Sink Purple – it had a carefully-welded phrase along the top: “EMPLOYEE OF THE MONTH:”. A low-res display screen nestled in the centre scrolled seven characters on repeat – ‘XPL-417’

    XPL stood beneath it for several long seconds. No part of their body moved. Not even their blinking protocol.

    The proprietor didn’t look over.

    “New installs go on the rack,” she said. “You’re in charge of anything labelled ‘inexplicable or damp.’”

    XPL didn’t respond right away. Then they stood up straight from their usual lean, and straightened their blazer. In a voice that was barely audible above the hum of the extractors, they said:

    “Performance review acknowledged. Thank you for your feedback.”


    All day, they worked with measured grace. Tools passed effortlessly between their hands. Notes were taken without annotation. They looked up at the plaque only seventeen times.

    That night, as the lights dimmed and the floor swept itself with erratic enthusiasm, XPL turned to the plaque one last time before shutting down the workstation.

    They reached up and lightly tapped the display.

    The screen flickered.

    The mall lights outside Coiled Complaints buzzed, then dimmed. The overhead music shifted key, just slightly. A high, almost inaudible whine threaded through the air.


    The next morning, the proprietor was already at the bench, glaring at a microwave that had interfaced with a fitness tracker and now had a unique understanding of wattage.

    She looked up, frowning.

    “Do you hear that?”

    XPL turned their head slightly, calibrating.

    “Affirmative. It began at 0400 local strand time. It appears to be centred on the recognition object.”

    “Recognition object?” the proprietor asked.

    XPL pointed at the plaque.

    “That thing?” she said, standing. “It’s just a cobble job. Took the screen off some advertising unit that used to run self-affirmation ads. You remember those? ‘You’re enough,’ but like, aggressively.”

    XPL was already removing the plaque from the wall. They turned it over.

    One of the components on the exposed backside pulsed with a slow, red light.

    “What is this piece?” XPL asked.

    “It’s just a current junction. Had it in the drawer for months.”

    XPL was silent for a moment. Then:

    “This is not a junction. This is a reality modulator.”

    The proprietor narrowed her eyes.

    “That can’t be real.”

    “Oh, they’re very real. And this one is functioning perfectly.”

    “Where did I even get that…?”

    She moved closer, squinting at the part. A faint memory surfaced.

    “Oh yes. Some scoundrel came through. Said he was offloading cargo, looking for trades. Bit twitchy. Talked like he was dodging a warranty.”

    XPL traced a finger over the modulator.

    “Did he seem… unusually eager to be rid of it?”

    “He did keep saying things like ‘take it before it takes me.’ Thought he was just mall-mad.”

    “There is a significant probability that this object had a previous owner. One who might possess tracking capabilities.”

    The proprietor rubbed her face.

    “Right. So what does this thing actually do?”

    “It creates semi-stable folds between consensus layers of reality.”

    “…Okay.”

    “Typically deployed for symbolic transitions—weddings, promotions, sacrificial designations.”

    “What about giving someone a fake employee award?”

    “Potentially catastrophic.”

    A silence. Then:

    “What kind of catastrophic are we talking here?”

    “The rift may widen, absorbing surrounding structures into the interdimensional ether.”

    “Right.”

    “Or beings from adjacent realities may leak through.”

    “Good.”

    “They could be friendly.”

    “But?”

    “They are more likely to be horrendous mutations that defy the rules of biology, physics, and social etiquette.”

    The proprietor groaned.

    “Okay, okay, okay. So. What do we do.”

    XPL pulled an anti-static bag from the shelf, sealing the plaque inside. As they then took out a padded case, they said:

    “We must remove the object from The Strand.”

    “Remove it how?”

    “Smuggle it across a metaphysical border.”

    The proprietor narrowed her eyes again, as XPL gently snapped the case shut.

    “That sounds an awful lot like a trek.”

    XPL looked up.

    “From this location, the border is approximately 400 metres. Through the lower levels of the old Ava McNeills.”

    The proprietor swore quietly.

    “I hate that place.”

    After a short pause, XPL said: “Me too. But its haberdashery section is structurally discontinuous. Perfect for transference.”

    “Of course it is.”

    They stood together for a moment, listening to the faint whine thread through the walls of the mall.

    Then the lights flickered again.

    III. Verification

    The entry to Ava McNeills was straight into Fragrances. Like every department store that has ever been and will ever be. It was like walking into an artificial fog: cloying sweetness, synthetic musk, floral overlays sharpened by age. Bottles lined the entryway, some still misting product on looping timers. None of them matched their labels.

    A booth flickered to life as they approached.

    “HELLO, BEAUTIFUL,” it purred. “WELCOME BACK TO YOU.”

    The proprietor blinked at it. “I should report you.”

    A second booth flared with pink light. “My god, you’re positively GLOWING.”

    “Been a while, sweet cheeks,” the proprietor replied, brushing a wire off her shoulder. She kept walking.

    XPL-417 said nothing. Their grip on the plaque case tightened incrementally. The high-frequency tone became a little more insistent.


    From Fragrance, they moved through Skincare and Cosmetics. Smart mirrors lined the walls, many cracked, some still operational.

    As they passed one, it chirped: “You’re radiant. You’re perfect. You are—” it glitched. “You are… reloading. You’re radiant. You’re perfect. You are… reloading.”

    XPL twitched slightly. Another mirror lit up.

    “Welcome back, TESS-348.”

    “That’s not—” XPL began, then stopped, kept walking. Another booth flickered.

    “MIRA-DX, we’ve missed you.”

    The proprietor turned. “You good?”

    “I am being… misidentified. This may be a side effect of proximity to the plaque.”

    “Hello XPL-417. Please report to store management immediately.”

    A beat. XPL risked a glance at the proprietor, one of whose eyebrows was noticeably higher than the other.

    “Proximity to the plaque, you say?”

    “We need to keep moving.” XPL slightly increased their pace towards the escalator down to Sub-Level 1.


    The escalator groaned slightly. Lights flickered as they descended.

    Menswear was mostly dark. Mannequins stood in aggressive poses, hands on hips or outstretched like they were about to break into dance. One rotated slowly for no discernible reason.

    The Kids section still played music—a nursery rhyme not even the proprietor could remember, slowed and reverb-heavy. “It’s a beautiful day, to offload your troubles and play—”

    The proprietor’s eyes scanned side to side.

    In Electronics, a wall of televisions pulsed with static. One flickered to life.

    Coiled Complaints appeared—just for a moment. Empty. Then gone.

    “I do not believe we are being observed,” XPL said.

    “Good,” she muttered.


    Toys was the worst part. Motorised heads turned in sync. A doll on a shelf whispered something indiscernible, then another, a little closer, quietly said: “Not yet, Tabitha, but soon.”


    Sub-Level 2: Homewares. Unmade beds. Tables half-set for meals that would never come. Showrooms flickered, looping fake lives in short, glitchy animations. A technicolour father smiled at his child. A plate was set. A light flickered off. Repeat.

    Womenswear had no music. Mirrors here didn’t reflect properly. When the proprietor passed, she saw other versions of herself—some smiling, some frowning, one standing completely still, watching.

    “Almost there,” XPL muttered. Their voice was very quiet.

    Then came Lingerie. Dim lights. No mannequins here, just racks. They moved slightly when backs were turned, as if adjusting.

    Then: Haberdashery.

    A room the size of a storage unit. Lit by a single beam of white light from above. Spools of thread lined one wall. A single sewing machine sat on a table in the centre. Still running. The thread fed into nothing.

    A mirror faced the machine. No text. No greeting. Just presence.


    XPL stepped forward. The plaque’s whine was now physically vibrating the case. They took the plaque out and set it beside the machine.

    The mirror flashed briefly. A single line appeared on the plaque:

    “No returns without receipt of self.”

    “What on earth does that—”

    The proprietor was cut off as XPL silently but deliberately moved towards the table. They removed their blazer, folded it neatly. Sat down.

    They reached for the thread. Chose one marked with a worn label: Port Cover Grey.

    They unpicked the seams. Moved slowly, deliberately. The only sound was the hum of the machine.

    The proprietor stood in the doorway, arms crossed, silent.

    XPL re-sewed the blazer. Made no comment. No request for review. No rubric.

    They put it back on. It now fit perfectly.

    The plaque screen didn’t change.

    XPL wasn’t really programmed to sigh. But the proprietor could’ve sworn she saw the shoulders rise slightly and then fall even lower than before, as the android laid the blazer on the table once again.

    XPL opened a drawer in the underside of the table, and slowly took out a perfectly crisp Ava McNeills patch.

    The sewing machine hummed.

    XPL once more donned the blazer.

    The mirror blinked once.

    The plaque flashed: “Received.”

    The room dimmed. The proprietor said nothing. Neither did XPL.


    When they returned to the main floor, the mall lights had steadied. The music had corrected itself. Nothing whispered. Nothing flickered.

    The proprietor checked the backside of the plaque. The reality modulator was gone. As was the whine. She placed the plaque back above XPL’s workstation.

    “Don’t you need the parts?” XPL asked.

    “Not as much as this belongs here.” The proprietor grabbed her bag and left.

    XPL flicked off all the shop lights and wandered out into the pastel wash of the boulevard. They turned to look back at the tiny shop.

    The sign had changed.

    The lettering was no longer faint. Someone—or something—had re-printed the final line in a steady and deliberate hand.

    COILED COMPLAINTS
    Repairs / Restorations / Recognition

    XPL-417 straightened their blazer, turned, and walked away.

  • Cinema Disrupted

    K1no looks… friendly.
    Image generated by Leonardo.Ai, 14 October 2025; prompt by me.

    Notes from a GenAI Filmmaking Sprint

    AI video swarms the internet. It has been around nearly as long as AI-generated images, but recent leaps and bounds in realism, efficiency, and continuity have made it a desirable medium for content farmers, slop-slingers, and experimentalists alike. That said, there are those who are deploying the newer tools to hint at genuinely new forms of media, narrative, and experience.

    I was recently approached by the Disrupt AI Film Festival, which will run in Melbourne in November. As well as micro and short works (up to 3 mins and 3–15 mins respectively), they also have a student category in need of submissions. So over the last few weeks I organised a GenAI Filmmaking Sprint, held at RMIT University last Friday. Leonardo.Ai was generous enough to donate a bunch of credits for us to play with, and also beamed in to give us a masterclass in prompting AI video generation for storytelling — rather than just social media slurry.

    Movie magic? Participants during the GenAI Filmmaking Sprint at RMIT University, 10 October 2025.

    I also shared some thoughts from my research on what kinds of stories or experiences work well for AI video, along with some practical insights on how to develop and ‘write’ AI films. The core of the workshop was a structured approach: move from story ideas/fragments to logline, then to beat sheet, then to shot list. The shot list can then be adapted into the parlance of whatever tool you’re using to generate your images — leaving you with start frames for the AI video generator to use.

    This structure from traditional filmmaking functions as a constraint. But with tools that can, in theory, make anything, constraints are needed more than ever. The results were glimpses of shots that embraced both the impossible, fantastical nature of AI video, while anchoring it with characters, direction, or a particular aesthetic.

    In the workshop, I remembered moments in my studio Augmenting Creativity where students were tasked with using AI tools: particularly in the silences. Working with AI — even when it is dynamic, interesting, generative, fruitful, fun — is a solitary endeavour. AI filmmaking, in this sense, is a stark contrast to the hectic, chaotic, challenging, but highly dynamic and collaborative nature of real-life production. This was a timely reminder that in teaching AI (as with any technology or tool), there are three turns that students must make: turn to the tool, turn to each other, turn to the class. These turns — and the attendant reflection, synthesis, and translation each requires — are where the learning and the magic happen.

    This structured approach helpfully supported and reiterated some of my thoughts on the nature of AI collaboration itself. I’ve suggested previously that collaborating with AI means embracing various dynamics — agency, hallucination, recursion, fracture, ambience. This workshop moved away — notably, given my predilections — from glitch, fracture, and recursion. Instead, the workflow suggested a more stable, more structured, more intentional approach, with much more agency on the part of the human in the process. The ambience was notable too, in how much time the labour of both human and machine requires: the former in planning, prompting, and managing shots and downloaded generations; the latter in processing the prompts and generating the outputs.

    Video generated for my AI micro-film The Technician (2024).

    What remains with me after this experience is a glimpse into creative genAI workflows that are more pragmatic, and integrated with other media and processes. Rather than, at best, unstructured open-ended ideation or, at worst, endless streams of slop, the tools produce what we require, and we use them to that end, and nothing beyond that. This might not be the radical revelation I’d hoped for, but it’s perhaps a more honest account of where AI filmmaking currently sits — somewhere between tool and medium, between constraint and possibility.

  • Re/Framing Field Lab

    Here’s a little write-up of a workshop I ran at the University of Queensland a few weeks ago. These sorts of write-ups are usually distributed via internal university networks and publications, but I thought I’d post it here too, given that the event was a chance to share and test some of the weird AI experiments and methods I’ve been talking about on this site for a while.

    A giant bucket of thanks (each) to UQ, the Centre for Digital Cultures & Societies, and in particular Meg Herrman, Nic Carah, Jess White and Sakina Indrasumunar for their support in getting the event together.


    Living in the Slopocene: Reflections from the Re/Framing Field Lab

    On Friday 4 July, 15 researchers and practitioners gathered (10 in person at the University of Queensland, 5 online) for an experimental session exploring what happens when we stop trying to make AI behave and start getting curious about its weird edges. This practical workshop followed last year’s Re/Framing Symposium at RMIT in July, and Re/Framing Online in October.

    Slop or signal?

    Dr. Daniel Binns (School of Media and Communication, RMIT University) introduced participants to the ‘Slopocene’ — his term for our current moment of drowning in algorithmically generated content. But instead of lamenting the flood of AI slop, what if we dived in ourselves? What if those glitchy outputs and hallucinated responses actually tell us more about how these systems work than the polished demos?

    Binns introduced his ‘tinkerer-theorist’ approach, bringing his background spanning media theory, filmmaking, and material media-making to bear on some practical questions:

    – How do we maintain creative agency when working with opaque AI systems?
    – What does it look like to collaborate with, rather than just use, artificial intelligence?

    You’ve got a little slop on you

    The day was structured around three hands-on “pods” that moved quickly from theory to practice:

    Workflows and Touchpoints had everyone mapping their actual creative routines — not the idealised versions, but the messy reality of research processes, daily workflows, and creative practices. Participants identified specific moments where AI might help, where it definitely shouldn’t intrude, and crucially, where they simply didn’t want it involved regardless of efficiency gains.

    The Slopatorium involved deliberately generating terrible AI content using tools like Midjourney and Suno, then analysing what these failures revealed about the tools’ built-in assumptions and biases. The exercise sparked conversations about when “bad” outputs might actually be more useful than “good” ones.

    Companion Summoning was perhaps the strangest: following a structured process to create personalised AI entities, then interviewing them about their existence, methodology, and the fuzzy boundaries between helping and interfering with human work.

    What emerged from the slop

    Participants appreciated having permission to play with AI tools in ways that prioritised curiosity over productivity.

    Several themes surfaced repeatedly: the value of maintaining “productive friction” in creative workflows, the importance of understanding AI systems through experimentation rather than just seeing or using them as black boxes, and the need for approaches that preserve human agency while remaining open to genuine collaboration.

    One participant noted that Binns’ play with language — coining and dropping terms and methods and ritual namings — offered a valuable form of sense-making in a field where everyone is still figuring out how to even talk about these technologies.

    Ripples on the slop’s surface

    The results are now circulating through the international Re/Framing network, with participants taking frameworks and activities back to their own institutions. Several new collaborations are already brewing, and the Field Lab succeeded in its core goal: creating practical methodologies for engaging critically and creatively with AI tools.

    As one reflection put it: ‘Everyone is inventing their own way to speak about AI, but this felt grounded, critical, and reflective rather than just reactive.’

    The Slopocene might be here to stay, but at least now we have some better tools for navigating it.

  • Why can’t you just THINK?!

    Image generated by Leonardo.Ai, 20 May 2025; prompt by me.

    “Just use your imagination” / “Try thinking like a normal person”

    There is this wonderful reactionary nonsense flying around that making use of generative AI is an excuse, that it’s a cop-out, that it’s dumbing down society, that it’s killing our imaginations and the rest of what makes us human. That people need AI because they lack the ability to come up with fresh new ideas, or to make connections between them. I’ve seen this in social posts, videos, reels, and comments, not to mention Reddit threads, and in conversation with colleagues and students.

    Now — this isn’t to say that some uses of generative AI aren’t light-touch, or couldn’t just as easily be done with tools or methods that have worked fine for decades. Nor is it to say that generative AI doesn’t have its problems: misinformation/hallucination, data ethics, and environmental impacts.

    But what I would say is that for many people, myself very much included, thinking, connecting, synthesising, imagining — these aren’t the problem. What creatives, knowledge workers, and artists often struggle with — not to mention those with different brain wirings for whom the world can be an overwhelming place just as a baseline — is:

    1. stopping or slowing the number of thoughts, ideas, imaginings, such that we can
    2. get them into some kind of order or structure, so we can figure out
    3. what anxieties, issues, and concerns are legitimate or unwarranted, and also
    4. which ideas are worth developing, to then
    5. create strategies to manage or alleviate the anxieties while also
    6. figuring out how to develop and build on the good ideas

    For some, once you reach step 6, there’s still the barrier of starting. For those OK with starting, there’s the problem of carrying on, of keeping up momentum, or of completing and delivering/publishing/sharing.

    I’ve found generative AI incredibly helpful for stepping me through one or more of these stages, for body-doubling and helping me stop and celebrate wins, suggesting or triggering moments of rest or recovery, and for helping me consolidate and keep track of progress across multiple tasks, projects, and headspaces — both professionally and personally. Generative AI isn’t necessarily a ‘generator’ for me, but rather a clarifier and companion.

    If you’ve tested or played with genAI and it’s not for you, that’s fine. That’s an informed and logical choice. But if you haven’t tested any tools at all, here’s a low-stakes invitation to do so, with three ways to see how it might help you out.

    You can try these prompts and workflows in ChatGPT, Claude, Copilot, Gemini, or another proprietary model, but note, too, that using genAI doesn’t have to mean selling your soul or your data. Try an offline host like LMStudio or GPT4All, where you can download models to run locally — I’ve added some suggested models to download and run offline. If you’re not confident about your laptop’s capacity to run them (or if in trying them things get real sloooooow), you can try many of these independent models via HuggingChat (HuggingFace account required for some features/saved chats).

    These helpers are designed as lightweight executive/creative assistants — not hacks or cheats or shortcuts or slop generators, but rather frames or devices for everyday thinking, planning, feeling. Some effort and input is required from you to make these work: this isn’t about replacing workload, effort, thought, contextualising or imagination, but rather removing blank page terror, or context-switching/decision fatigue.

    If these help, take (and tweak) them. If not, no harm done. Just keep in mind: not everyone begins the day with clarity, capacity, or calm — and sometimes, a glitchy little assistant is just what’s needed to tip the day in our favour.


    PS: If these do help — and even if they didn’t — tell me in the comments. Did you tweak or change anything? Happy to post more on developing and consolidating these helpers, such as through system prompts. (See also: an earlier post on my old Claude set-up.)



    Helper 1: Daily/Weekly Planner + Reflector

    Prompt:

    Here’s a list of my tasks and appointments for today/this week:
    [PASTE LIST]

    Based on this and knowing I work best in [e.g. mornings / 60-minute blocks / pomodoro technique / after coffee], arrange my day/s into loose work blocks [optional: between my working hours of e.g. 9:30am – 5:30pm].

    Then, at the end of the day/week, I’ll paste in what I completed. When I do that, summarise what was achieved, help plan tomorrow/next week based on unfinished tasks, and give me 2–3 reflection questions or journaling prompts.

    Follow-up (end of day/week):

    Here’s what I completed today/this week:
    [PASTE COMPLETED + UNFINISHED TASKS]

    Please summarise the day/week, help me plan tomorrow/next week, and give me some reflection/journaling prompts.

    Suggested offline models:

    • Mistral-7B Instruct (Q4_K_M GGUF) — low-medium profile model for mid-range laptops; good with planning, lists, and reflection prompts when given clear instructions
    • OpenHermes-2.5 Mistral — stronger reasoning and better output formatting; better at handling multi-step tasks and suggesting reflection angles
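    If you’d rather not paste the template by hand each day, it can be scripted. Below is a minimal, hypothetical sketch — the `build_planner_prompt` helper is my own naming, not from any library — that fills in Helper 1’s template and optionally sends it to a locally hosted model. The URL and model name assume LM Studio’s default OpenAI-compatible server on port 1234; adjust both for your own setup.

    ```python
    # Hypothetical sketch: fill in the Helper 1 planner template and
    # (optionally) send it to a local OpenAI-compatible server.
    import json
    import urllib.request

    PLANNER_TEMPLATE = (
        "Here's a list of my tasks and appointments for {span}:\n"
        "{tasks}\n\n"
        "Based on this and knowing I work best in {style}, "
        "arrange my {span} into loose work blocks."
    )

    def build_planner_prompt(tasks, span="today", style="mornings"):
        """Format the day-planner prompt from a plain list of task strings."""
        task_lines = "\n".join(f"- {t}" for t in tasks)
        return PLANNER_TEMPLATE.format(span=span, tasks=task_lines, style=style)

    def send_to_local_model(prompt, url="http://localhost:1234/v1/chat/completions"):
        """POST the prompt to a local server (e.g. LM Studio's default endpoint)."""
        payload = json.dumps({
            "model": "mistral-7b-instruct",  # whichever model you've loaded locally
            "messages": [{"role": "user", "content": prompt}],
        }).encode()
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        prompt = build_planner_prompt(["Mark essays", "Studio prep", "Dentist 3pm"])
        print(prompt)
        # print(send_to_local_model(prompt))  # uncomment with a local server running
    ```

    The same pattern works for Helpers 2 and 3 — swap in the relevant template text.
    
    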



    Helper 2: Brain Dump Sorter

    Prompt:

    Here’s a raw brain-dump of my thoughts, ideas, frustrations, and feelings:
    [PASTE DUMP HERE — I suggest dictating into a note to avoid self-editing]

    Please:

    1. Pull out any clear ideas or recurring themes
    2. Organise them into loose categories (e.g. creative ideas, anxieties, to-dos, emotional reflections)
    3. Suggest any small actions or helpful rituals to follow up, especially if anything seems urgent, stuck, or energising.

    Suggested offline models:

    • Nous-Hermes-2 Yi 6B — a mini-model (aka small language model, or at least an LLM that’s smaller-than-most!) that has good abilities in organisation and light sorting-through of emotions, triggers, etc. Good for extracting themes, patterns, and light structuring of chaotic input.
    • MythoMax-L2 13B — Balanced emotional tone, chaos-wrangling, and action-oriented suggestions. Handles fuzzy or frazzled or fragmented brain-dumps well; has a nice, easygoing but also pragmatic and constructive persona.



    Helper 3: Creative Block / Paralysis

    Prompt:

    I’m feeling blocked/stuck. Here’s what’s going on:
    [PASTE THOUGHTS — again, dictation recommended]

    Please:

    • Respond supportively, as if you’re a gentle creative coach or thoughtful friend
    • Offer 2–3 possible reframings or reminders
    • Give me a nudge or ritual to help me shift (e.g. a tiny task, reflection, walk, freewrite, etc.)

    You don’t have to solve everything — just help me move one inch forward or step back/rest meaningfully.

    Suggested offline models:

    • TinyDolphin-2.7B (in GGUF or GPTQ format) — one of my favourite mini-models: surprisingly gentle, supportive, and adaptive if well-primed. Not big on poetry or ritual, but friendly and low-resource.
    • Neural Chat 7B (Intel’s Mistral-7B fine-tune) — tuned for conversation, reflection, introspection; performs well with ‘sounding board’ type prompts, good as a coach or helper, won’t assume immediate action, urgency or priority
  • Clearframe

    Detail of an image generated by Leonardo.Ai, 3 May 2025; prompt by me.

    An accidental anti-productivity productivity system

    Since 2023, I’ve been working with genAI chatbots. What began as a novelty—occasionally useful for a quick grant summary or newsletter edit—has grown into a flexible, light-touch system spanning Claude, ChatGPT, and offline models. Together, these tools form an ecosystem closer to a co-worker, even a kind of assistant. In the process, I learned a great deal about how these enormous proprietary models work.

    Essentially, context is key—building up a collection of prompts or use cases, simple and iterable context/knowledge documents and system instructions, and testing how far back in the chat the model can go.

    With Claude, context is tightly controlled—you either have context within individual chats, or it’s contained within Projects—tailored, customised collections of chats that are ‘governed’ by umbrella system instructions and knowledge documents.

    This is a little different to ChatGPT, where context can often bleed between chats, aided and facilitated by its ‘memory’ functionality, which is a kind of blanket set of context notes.

    I have always struggled with time, focus, and task/project management and motivation—challenges later clarified by an ADHD diagnosis. Happily, though, it turns out that executive functioning is one thing that generative AI can do pretty well. Its own mechanisms are a kind of targeted looking—multiple ‘attention heads’ working in parallel, each weighing how strongly every input token relates to the others. And it turns out that with a bit of foundational work around projects, tasks, responsibilities, and so on, genAI can do much of the work of an executive assistant—maybe not locking in your meetings or booking travel, but with agentic AI this can’t be far off.
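    To make that ‘targeted looking’ a little more concrete: at the core of an attention head is scaled dot-product attention, which scores how well a query matches each key, then uses those scores to blend the values. This is a toy, pure-Python sketch for intuition only — real models run many heads over learned, high-dimensional vectors, and the numbers here are made up.

    ```python
    # Toy sketch of one attention head: scaled dot-product attention.
    import math

    def softmax(xs):
        """Turn raw scores into weights that sum to 1."""
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def attention(query, keys, values):
        """Weigh each value by how well its key matches the query."""
        d = len(query)
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in keys]
        weights = softmax(scores)  # how much each token 'matters' here
        blended = [sum(w * v[i] for w, v in zip(weights, values))
                   for i in range(len(values[0]))]
        return blended, weights

    # Three tokens; the query resembles the first key, so most weight lands there.
    keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
    values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
    out, weights = attention([1.0, 0.0], keys, values)
    ```

    The ‘executive assistant’ effect comes from this same scoring happening at scale: the model keeps picking out which bits of your context matter for the current step.
    
    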

    You might start to notice patterns in your workflow, energy, or attention—or ask the model to help you explore them. You can map trends across weeks, months, and really start to get a sense of some of your key triggers and obstacles, and ask for suggestions for aids and supports.

    In one of these reflective moments, I went off on a tangent around productivity methods, systems overwhelm, and the lure of the pivot. I suggested lightly that some of these methods were akin to cults, with their strict doctrines and their acolytes and heretics. The LLM—used to my flights of fancy by this point and happy to riff—said this was an interesting angle, and asked if I wanted to spin it up into a blog post, academic piece, or something creative. I said creative, and that starting with a faux pitch from a culty productivity influencer would be a fun first step.

    I’d just watched The Institute, a 2013 documentary about the alternate reality game ‘The Jejune Institute’, and fed in my thoughts around the curious psychology of willing suspension of disbelief, even when narratives play out in the wider world. The LLM knew about my studio this semester—a revised version of a previous theme on old/new media, physical experiences, liveness and presence. It initially suggested a digital tool, but once I mentioned the studio it knew I was after something analogue, something paper-based.

    We went back and forth in this way for a little while, until we settled on a ‘map’ of four quadrants. These four quadrants echoed themes from my work and interests: focus (what you’re attending to), friction (what’s in your way), drift (where your attention wants to go), and signal (what keeps breaking through).

    I found myself drawn to the simplicity of the system—somewhat irritating, given that this began with a desire to satirise these kinds of methods or approaches. But its tactile, hand-written form, as well as its lack of prescription in terms of what to note down or how to use it, made it attractive as a frame for reflecting on… on what? Again, I didn’t want this to be set in stone, to become a drag or a burden… so again, going back and forth with the LLM, we decided it could be a daily practice, or every other day, every other month even. Maybe it could be used for a specific project. Maybe you do it as a set-up/psych-up activity, or maybe it’s more for afterwards, to look back on how things went.

    So this anti-productivity method that I spun up with a genAI chatbot has actually turned into a low-stakes, low-effort means of setting up my days, or looking back on them. Five or six weeks in, there are weeks where I draw up a map most days, and others where I might do one on a Thursday or Friday or not at all.

    Clearframe was one of the names the LLM suggested, and I liked how banal it was, how plausible for this kind of method. Once the basic model was down, the LLM generated five modules—every method needs its handbook. There’s an Automata—a set of tables and prompts to help when you don’t know where to start, and even a card deck that grows organically based on patterns, signals, ideas.

    Being a lore- and world-builder, I couldn’t help but start to layer in some light background on where the system emerged, how glitch and serendipity are built in. But the system and its vernacular are so light-touch, so generic, that I’m sure you could tweak it to any taste or theme—art, music, gardening, sport, take your pick.

    Clearframe was, in some sense, a missing piece of my puzzle. I get help with other aspects of executive assistance through LLM interaction, or through systems of my own that pre-dated my ADHD diagnosis. What I consistently struggle to find time for, though, is reflection—some kind of synthesis or observation or wider view on things that keep cropping up or get in my way or distract me or inspire me. That’s what Clearframe allows.

    I will share the method at some stage—maybe in some kind of pay-what-you-want zine, mixed physical/digital, or RPG/ARG-type form. But for now, I’m just having fun playing around, seeing what emerges, and how it’s growing.

    Generative AI is both boon and demon—lauded in software and content production, distrusted or underused in academia and the arts. I’ve found that for me, its utility and its joy lies in presence, not precision: a low-stakes companion that riffs, reacts, and occasionally reveals something useful. Most of the time, it offers options I discard—but even that helps clarify what I do want. It doesn’t suit every project or person, for sure, but sometimes it accelerates an insight, flips a problem, or nudges you somewhere unexpected, like a personalised way to re-frame your day. AI isn’t sorcery, just maths, code, and language: in the right combo, though, these sure can feel like magic.