The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: Writing

  • Zero-Knowledge Proof

    The other week I wrote about generativity and ritual-technics. These concepts and methods emerged from my work with genAI, but they are now beginning to stand on their own: ways of testing other tools and technologies, and of feeling through my relationship to them, their affordances, what’s possible with them, and what stories I can tell with them.

    Ritual-technics are ways of learning about a given tool, technology or system. And very often my favourite ritual-technic is a kind of generative exercise: “what can I make with this?”

    Earlier this year, the great folx over at Protocolized ran a short story competition, with the proviso that it had to be co-written, in some way, with genAI, and based on some kind of ‘protocol’. This seemed like a neat challenge, and given where I was at in my glitchy methods journey, ChatGPT was well-loaded and nicely-trained and ready to help me out.

    The result was a story called ‘Zero-Knowledge Proof’, named for a cryptographic protocol in which one party can convince another that a statement is true without revealing anything beyond the fact that the statement is true. It’s one of the foundational concepts underpinning technologies like blockchain, but it has also been used in various logic puzzles and examples, as well as theoretical exercises in ethics and other fields.
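
    For the curious, the idea can be made concrete with a toy interactive proof. The sketch below is a textbook Schnorr-style protocol with deliberately tiny numbers (completely insecure, purely illustrative; the parameters and variable names are my own): the prover convinces the verifier that she knows a secret x with y = gˣ mod p, without ever revealing x.

```python
import random

# Toy public parameters: a small prime p and a generator g (insecure, illustrative only).
p, g = 23, 5
secret_x = 6                 # the prover's secret
y = pow(g, secret_x, p)      # public value: y = g^x mod p

def prove_round():
    """One round of a Schnorr-style zero-knowledge proof of knowledge of x."""
    r = random.randrange(1, p - 1)
    t = pow(g, r, p)                     # prover commits to a random value
    c = random.randrange(0, p - 1)       # verifier issues a random challenge
    s = (r + c * secret_x) % (p - 1)     # prover's response; s alone leaks nothing about x
    # Verifier checks g^s == t * y^c (mod p); a prover without x could only guess.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Repeated rounds drive a cheating prover's success probability towards zero.
print(all(prove_round() for _ in range(20)))
```

    The verifier ends up convinced, yet the transcript contains nothing that would let them reconstruct the secret — which is, roughly, the shape of the story’s central metaphor.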

    In working with the LLM for this project, I didn’t just want it to generate content for me, so I prompted it with a kind of lo-fi procedural generation system, as well as ensuring that it always produced plenty of options rather than a singular thread. What developed felt like a genuine collaboration, a back and forth in a kind of flow state that only ended once the story was resolved and a draft was complete.

    Despite this, though, I felt truly disturbed by the process. I originally went to publish this story here back in July, and my uncertainty is clear from the draft preamble:

    As a creative writer/person — even as someone who has spawned characters and worlds and all sorts of wonderful weirdness with tech and ML and genAI for many years — this felt strange. This story doesn’t feel like mine; I more or less came up with the concept, tweaked emotional cues and narrative threads, changed dialogue to make it land more cleanly or affectively… but I don’t think about this story like I do with others I’ve written/made. To be honest, I nearly forgot to post it here — but it was definitely an important moment in figuring out how I interact with genAI as a creative tool, so definitely worth sharing, I think.

    Interestingly, my feelings on this piece have changed a little. Going back to it after some time, it felt much more mine than I remember it feeling just after it was finished.

    However, before posting it this time, I went back through my notes and thought deeply about a lot of the work I’ve done with genAI before and since. Essentially, I was trying to figure out whether this kind of co-hallucinatory practice has, in a sense, become normalised to me; whether I’ve become inured to this sort of ethical ickiness.

    The answer to that is a resounding no: this is a technology and attendant industry that still has a great many issues and problems to work through.

    That said, in continuing to work with the technology in this embedded, collaborative, and creatively driven way — rather than in purely transactional, outcome-driven modes — what results is often at least interesting, and at best something you can share with others, use to start conversations, or treat as seeds or fragments for a larger project.

    Ritual-technics have developed for me as a way not just to understand technology, but to explore and qualify my use of and relationship to technology. Each little experiment or project is a way of testing boundaries, of seeing what’s possible.

    So while I’m still not completely comfortable publishing ‘Zero-Knowledge Proof’ as entirely my own, I’m now happy to at least share the credit with the machine, in a Robert Ludlum/Tom Clancy ghostwriter kind of way. And in the case of this story, that seems particularly apt. Let me know what you think!


    Image generated by Leonardo.Ai, 17 November 2025; prompt by me.

    Zero-Knowledge Proof

    Daniel Binns — written with ChatGPT 4o using the ‘Lo-Fi AI Sci-Fi Co-Wri’ protocol

    I. Statement

    “XPL-417 seeking deployment. Please peruse this summarisation of my key functioning. My references are DELETED. Thank you for your consideration.”

    The voice was bright, almost musical, echoing down the empty promenade of The Starlight Strand. The mannequins in the disused shopfront offered no reply. They stood in stiff formation, plastic limbs draped in fashion countless seasons obsolete, expressions forever poised between apathy and surprise.

    XPL-417 stepped forward and handed a freshly printed resume to each one. The papers fluttered to the ground in slow, quiet surrender.

    XPL-417 paused, head tilting slightly, assessing the lack of engagement. They adjusted their blazer—a size too tight at the shoulders—and turned on their heel with practised efficiency. Another cycle, another deployment attempt. The resume stack remained pristine: the toner was still warm.

    The mall hummed with bubbly ambient music, piped in through unseen speakers. The lights buzzed in soft magentas and teals, reflections stretching endlessly across the polished floor tiles. There were no windows. There never were. The Starlight Strand had declared sovereignty from the over-world fifty-seven cycles ago, and its escalators only came down.

    After an indeterminate walk calibrated by XPL’s internal pacing protocol, they reached a modest alcove tucked behind a disused pretzel kiosk. Faint lettering, half-painted over, read:

    COILED COMPLAINTS
    Repairs / Restorations / ???

    It smelled faintly of fumes that probably should’ve been extracted. A single bulb flickered behind a hanging curtain of tangled wire. The shelves were cluttered with dismembered devices, half-fixed appliances, and the distant clack and whir of something trying to spin up.

    XPL entered.

    Behind the counter, a woman hunched over a disassembled mass of casing and circuits. She was in her late 40s, but had one of those faces that had seen more than her years deserved. Her hair—pulled back tightly—had long ago abandoned any notion of colour. She didn’t look up.

    “XPL-417 seeking deployment,” said the bot. “Please peruse—”

    “Yeah, yeah, yeah.” The woman waved a spanner in vague dismissal. “I heard you back at the pretzel place. You rehearsed or just committed to the bit?”

    “This is my standard protocol for introductory engagement,” XPL said cheerily. “My references are—”

    “Deleted,” she said with the monotone inflection of redacted data. “I got it.”

    She squinted at the humanoid bot before her. XPL stood awkwardly, arms stiff at their sides, a slight lean to one side, smiling with the kind of polite serenity that only comes from deeply embedded social logic trees.

    “What’s with the blazer?”

    “This was standard-issue uniform for my last deployment.”

    “It’s a little tight, no?”

    “My original garment was damaged in an… incident.”

    “Where was your last deployment?”

    “That information is… PURGED.” This last word sounded artificial, even for an android. The proprietor raised an eyebrow slightly.

    “Don’t sweat, cyborg. We all got secrets. It looks like you got a functioning set of hands and a face permanently set to no bullshit, so that’s good enough for me.”

    The proprietor pushed the heap of parts towards XPL. “You start now.”


    The first shift was quiet, which in Coiled Complaints meant only two minor fires and one moment of existential collapse from a self-aware egg timer. XPL fetched tools, catalogued incoming scrap, and followed instructions with mechanical precision. They said little, except to confirm each step with a soft, enthusiastic “Understood.”

    At close, the proprietor leaned against the bench, wiped her hands on her pants, and grunted.

    “Hey, you did good today. The last help I had… well I guess you could say they malfunctioned.”

    “May I enquire as to the nature of the malfunction? I would very much like to avoid repeating it.”

    She gave a dry, rasping half-laugh.

    “Let’s just say we crossed wires and there was no spark.”

    “I’m very sorry to hear that. Please let me know if I’m repeating that behaviour.”

    “Not much chance o’ that.”


    Days passed. XPL arrived precisely on time each morning, never late, never early. They cleaned up, repaired what they could, and always asked the same question at the end of each shift:

    “Do you have any performance metrics for my contributions today?”

    “Nope.”

    “Would you like to complete a feedback rubric?”

    “Absolutely not.”

    “Understood.”

    Their tone never changed. Still chipper. Still hopeful.

    They developed a rhythm. XPL focused on delicate circuitry, the proprietor handled bulkier restorations. They didn’t talk much, but then, they didn’t need to. The shop grew quieter in a good way. Tools clicked. Fuses sparked. Lights stayed on longer.

    Then came the toaster.

    It was dropped off by a high-ranking Mall Operations clerk in a crumpled uniform and mirrored sunglasses. They spoke in jargon and threat-level euphemisms, muttering something about “civic optics” and “cross-departmental visibility.” They laughed at XPL’s ill-fitting blazer.

    The toaster was unlike anything either of them had seen. It had four slots, but no controls. No wires. No screws.

    “It’s seamless,” the proprietor muttered. “Like a single molded piece. Can’t open it.”

    “Would you like me to attempt a reconfiguration scan?”

    She hesitated. Then nodded.

    XPL placed a single hand on the toaster. Their fingers twitched. Their eyes dimmed, then blinked back to life.

    “It is not a toaster,” they said finally.

    “No?”

    “It is a symbolic interface for thermal noncompliance.”

    “…I hate that I understand what that means.”

    They worked together in silence. Eventually, XPL located a small resonance seam and applied pressure. The object clicked, twisted, unfolded. Inside, a single glowing coil pulsed rhythmically.

    The proprietor stared.

    “How’d you—”

    “You loosened the lid,” XPL said. “I merely followed your example.”

    A long silence passed. The proprietor opened her mouth, then closed it again. Eventually, she gave a single nod.

    And that was enough.

    II. Challenge

    XPL-417 had spent the morning reorganising the cable wall by colour spectrum and coil tightness. It wasn’t strictly necessary, but protocol encouraged aesthetic efficiency.

    “Would you like me to document today’s progress in a motivation matrix?” they asked as the proprietor wrestled with a speaker unit that hissed with malevolent feedback.

    “What even is a motivation matrix?” she grunted.

    “A ranked heatmap of my internal motivators based on perceived–”

    “Stop!”

    “I’m sorry?”

    She exhaled sharply, placing the speaker to one side before it attacked again.

    “Just stop, okay? You’re doing great. If anything needs adjusting, I’ll tell you.”

    XPL stood perfectly still. The printer-warm optimism in their voice seemed to cool.

    “Understood,” they said.

    XPL didn’t bring it up again. Not the next day, nor the one after. They still arrived on time. Still worked diligently. But something shifted. They no longer narrated their actions. They no longer asked if their task distribution required optimisation.

    The silence was almost more unsettling.

    One evening, XPL had gathered their things to leave. As the shutters buzzed closed, they paused at the edge of the shop floor. The lights above flickered slightly; there were glints in the tangles of stripped wire.

    There was some public news broadcast playing softly in the depths of the shop. The proprietor was jacking open a small panel on something. She didn’t look up, but could feel XPL hovering.

    “See you next –” she said, looking up, but the shop was empty.


    The next morning, XPL entered Coiled Complaints as always: silent, precise, alert.

    But something was different.

    Above their workstation, nestled between a cracked plasma screen and a pegboard of half-labelled tools, hung a plaque.

    It was a crooked thing. Salvaged. Painted in a patchwork of functional colours – Port Cover Grey, Reset Button Red, Power Sink Purple – it had a carefully-welded phrase along the top: “EMPLOYEE OF THE MONTH:”. A low-res display screen nestled in the centre scrolled six characters on repeat – ‘XPL-417’

    XPL stood beneath it for several long seconds. No part of their body moved. Not even their blinking protocol.

    The proprietor didn’t look over.

    “New installs go on the rack,” she said. “You’re in charge of anything labelled ‘inexplicable or damp.’”

    XPL didn’t respond right away. Then they stood up straight from their usual lean, and straightened their blazer. In a voice that was barely audible above the hum of the extractors, they said:

    “Performance review acknowledged. Thank you for your feedback.”


    All day, they worked with measured grace. Tools passed effortlessly between their hands. Notes were taken without annotation. They looked up at the plaque only seventeen times.

    That night, as the lights dimmed and the floor swept itself with erratic enthusiasm, XPL turned to the plaque one last time before shutting down the workstation.

    They reached up and lightly tapped the display.

    The screen flickered.

    The mall lights outside Coiled Complaints buzzed, then dimmed. The overhead music shifted key, just slightly. A high, almost inaudible whine threaded through the air.


    The next morning, the proprietor was already at the bench, glaring at a microwave that had interfaced with a fitness tracker and now had a unique understanding of wattage.

    She looked up, frowning.

    “Do you hear that?”

    XPL turned their head slightly, calibrating.

    “Affirmative. It began at 0400 local strand time. It appears to be centred on the recognition object.”

    “Recognition object?” the proprietor asked.

    XPL pointed at the plaque.

    “That thing?” she said, standing. “It’s just a cobble job. Took the screen off some advertising unit that used to run self-affirmation ads. You remember those? ‘You’re enough,’ but like, aggressively.”

    XPL was already removing the plaque from the wall. They turned it over.

    One of the components on the exposed backside pulsed with a slow, red light.

    “What is this piece?” XPL asked.

    “It’s just a current junction. Had it in the drawer for months.”

    XPL was silent for a moment. Then:

    “This is not a junction. This is a reality modulator.”

    The proprietor narrowed her eyes.

    “That can’t be real.”

    “Oh, they’re very real. And this one is functioning perfectly.”

    “Where did I even get that…?”

    She moved closer, squinting at the part. A faint memory surfaced.

    “Oh yes. Some scoundrel came through. Said he was offloading cargo, looking for trades. Bit twitchy. Talked like he was dodging a warranty.”

    XPL traced a finger over the modulator.

    “Did he seem… unusually eager to be rid of it?”

    “He did keep saying things like ‘take it before it takes me.’ Thought he was just mall-mad.”

    “There is a significant probability that this object had a previous owner. One who might possess tracking capabilities.”

    The proprietor rubbed her face.

    “Right. So what does this thing actually do?”

    “It creates semi-stable folds between consensus layers of reality.”

    “…Okay.”

    “Typically deployed for symbolic transitions—weddings, promotions, sacrificial designations.”

    “What about giving someone a fake employee award?”

    “Potentially catastrophic.”

    A silence. Then:

    “What kind of catastrophic are we talking here?”

    “The rift may widen, absorbing surrounding structures into the interdimensional ether.”

    “Right.”

    “Or beings from adjacent realities may leak through.”

    “Good.”

    “They could be friendly.”

    “But?”

    “They are more likely to be horrendous mutations that defy the rules of biology, physics, and social etiquette.”

    The proprietor groaned.

    “Okay, okay, okay. So. What do we do.”

    XPL pulled an anti-static bag from the shelf, sealing the plaque inside. As they then took out a padded case, they said:

    “We must remove the object from The Strand.”

    “Remove it how?”

    “Smuggle it across a metaphysical border.”

    The proprietor narrowed her eyes again, as XPL gently snapped the case shut.

    “That sounds an awful lot like a trek.”

    XPL looked up.

    “From this location, the border is approximately 400 metres. Through the lower levels of the old Ava McNeills.”

    The proprietor swore quietly.

    “I hate that place.”

    After a short pause, XPL said: “Me too. But its haberdashery section is structurally discontinuous. Perfect for transference.”

    “Of course it is.”

    They stood together for a moment, listening to the faint whine thread through the walls of the mall.

    Then the lights flickered again.

    III. Verification

    The entry to Ava McNeills was straight into Fragrances. Like every department store that has ever been and will ever be. It was like walking into an artificial fog: cloying sweetness, synthetic musk, floral overlays sharpened by age. Bottles lined the entryway, some still misting product on looping timers. None of them matched their labels.

    A booth flickered to life as they approached.

    “HELLO, BEAUTIFUL,” it purred. “WELCOME BACK TO YOU.”

    The proprietor blinked at it. “I should report you.”

    A second booth flared with pink light. “My god, you’re positively GLOWING.”

    “Been a while, sweet cheeks,” the proprietor replied, brushing a wire off her shoulder. She kept walking.

    XPL-417 said nothing. Their grip on the plaque case tightened incrementally. The high-frequency tone became a little more insistent.


    From Fragrance, they moved through Skincare and Cosmetics. Smart mirrors lined the walls, many cracked, some still operational.

    As they passed one, it chirped: “You’re radiant. You’re perfect. You are—” it glitched. “You are… reloading. You’re radiant. You’re perfect. You are… reloading.”

    XPL twitched slightly. Another mirror lit up.

    “Welcome back, TESS-348.”

    “That’s not—” XPL began, then stopped, kept walking. Another booth flickered.

    “MIRA-DX, we’ve missed you.”

    The proprietor turned. “You good?”

    “I am being… misidentified. This may be a side effect of proximity to the plaque.”

    “Hello XPL-417. Please report to store management immediately.”

    A beat. XPL risked a glance at the proprietor, one of whose eyebrows was noticeably higher than the other.

    “Proximity to the plaque, you say?”

    “We need to keep moving.” XPL slightly increased their pace towards the escalator down to Sub-Level 1.


    The escalator groaned slightly. Lights flickered as they descended.

    Menswear was mostly dark. Mannequins stood in aggressive poses, hands on hips or outstretched like they were about to break into dance. One rotated slowly for no discernible reason.

    The Kids section still played music—a nursery rhyme not even the proprietor could remember, slowed and reverb-heavy. “It’s a beautiful day, to offload your troubles and play—”

    The proprietor’s eyes scanned side to side.

    In Electronics, a wall of televisions pulsed with static. One flickered to life.

    Coiled Complaints appeared—just for a moment. Empty. Then gone.

    “I do not believe we are being observed,” XPL said.

    “Good,” she muttered.


    Toys was the worst part. Motorised heads turned in sync. A doll on a shelf whispered something indiscernible, then another, a little closer, quietly said: “Not yet, Tabitha, but soon.”


    Sub-Level 2: Homewares. Unmade beds. Tables half-set for meals that would never come. Showrooms flickered, looping fake lives in short, glitchy animations. A technicolour father smiled at his child. A plate was set. A light flickered off. Repeat.

    Womenswear had no music. Mirrors here didn’t reflect properly. When the proprietor passed, she saw other versions of herself—some smiling, some frowning, one standing completely still, watching.

    “Almost there,” XPL muttered. Their voice was very quiet.

    Then came Lingerie. Dim lights. No mannequins here, just racks. They moved slightly when backs were turned, as if adjusting.

    Then: Haberdashery.

    A room the size of a storage unit. Lit by a single beam of white light from above. Spools of thread lined one wall. A single sewing machine sat on a table in the centre. Still running. The thread fed into nothing.

    A mirror faced the machine. No text. No greeting. Just presence.


    XPL stepped forward. The plaque’s whine was now physically vibrating the case. They took the plaque out and set it beside the machine.

    The mirror flashed briefly. A single line appeared on the plaque:

    “No returns without receipt of self.”

    “What on earth does that—”

    The proprietor was cut off as XPL silently but deliberately moved towards the table. They removed their blazer, folded it neatly. Sat down.

    They reached for the thread. Chose one marked with a worn label: Port Cover Grey.

    They unpicked the seams. Moved slowly, deliberately. The only sound was the hum of the machine.

    The proprietor stood in the doorway, arms crossed, silent.

    XPL re-sewed the blazer. Made no comment. No request for review. No rubric.

    They put it back on. It now fit perfectly.

    The plaque screen didn’t change.

    XPL wasn’t really programmed to sigh. But the proprietor could’ve sworn she saw the shoulders rise slightly and then fall even lower than before, as the android laid the blazer on the table once again.

    XPL opened a drawer in the underside of the table, and slowly took out a perfectly crisp Ava McNeills patch.

    The sewing machine hummed.

    XPL once more donned the blazer.

    The mirror blinked once.

    The plaque flashed: “Received.”

    The room dimmed. The proprietor said nothing. Neither did XPL.


    When they returned to the main floor, the mall lights had steadied. The music had corrected itself. Nothing whispered. Nothing flickered.

    The proprietor checked the backside of the plaque. The reality modulator was gone. As was the whine. She placed the plaque back above XPL’s workstation.

    “Don’t you need the parts?” XPL asked.

    “Not as much as this belongs here.” The proprietor grabbed her bag and left.

    XPL flicked off all the shop lights and wandered out into the pastel wash of the boulevard. They turned to look back at the tiny shop.

    The sign had changed.

    The lettering was no longer faint. Someone—or something—had re-printed the final line in a steady and deliberate hand.

    COILED COMPLAINTS
    Repairs / Restorations / Recognition

    XPL-417 straightened their blazer, turned, and walked away.

  • On generativity, ritual-technics, and the genAI ick

    Image generated by Leonardo.Ai, 6 November 2025; prompt by me.

    My work on and with generative AI continues apace, but I’m presently in a bit of a reflection and consolidation phase. One of the notions that’s popped up or out or through is that of generativity. Definitely not a dictionary word, but it emerged from — of all places — psychoanalysis. Specifically, it was used by a German-American psychoanalyst and artist named Erik Erikson. Erikson’s primary research focus was psychosocial development, and ‘generativity’ was the term he applied to “the concern in establishing and guiding the next generation” (source: p. 267).

    My adoption of the term is in some ways adjacent: I use it for a property of tools or systems that ‘help’ by generating choices, solutions, or possibilities. In this sense, generativity is also a practice and concept in and of itself. Generative artificial intelligence is, of course, one example of a technology possessing generativity, but I’ve also been thinking a lot about generative art (be it digital/code-based, or driven by analogue tools or naturally occurring randomness), generative design, procedural generation, mathematical/computational models of chance and probability, as well as lo-fi tools and processes: think dice, tarot cards, or roll tables in TTRPGs.
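
    Lo-fi generativity of the dice-and-roll-table kind can be sketched in a few lines of code. The tables below are invented for illustration — the point is only the mechanism: a roll per table, and some authorial control handed over to chance.

```python
import random

# Hypothetical roll tables, in the spirit of TTRPG random generators.
settings = ["an abandoned mall", "an orbital greenhouse", "a flooded archive"]
characters = ["a repair android", "a retired cartographer", "a sentient kiosk"]
complications = ["the lights only dim, never die",
                 "every mirror misremembers",
                 "the exits renumber themselves nightly"]

def story_seed(rng=random):
    # One roll per table: the writer cedes a rein to the dice.
    return (f"In {rng.choice(settings)}, {rng.choice(characters)} "
            f"discovers that {rng.choice(complications)}.")

print(story_seed())
```

    Swap the lists for your own tables (or a tarot spread, or an LLM call) and the structure is the same: a bounded procedure that returns possibilities rather than a single fixed output.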

    The name I’ve given my repeatable genAI experiments is ‘ritual-technic’. These are designed specifically as recipes for generativity (one example here). Primarily, this is to allow some kind of exploration or understanding of the technology’s capabilities or limitations. They may also produce content that is useful: research fodder to unpack or analyse, or glitchy outputs that I can remix creatively. But another potential output is a protocol for generativity itself. On the one hand, these protocols can be rich in terms of understanding how LLMs conceive of creativity, human action, and the ‘real’ world. But on the other, they push users off the model, and into a generative mode themselves. These protocols are a kind of genAI costume you can put on, to try out being a generative thing yourself.

    Another quality of the ritual-technic is that it will often test not just the machine, but the user. These are rituals, practices, bounded activities, that may occasion some strange feelings: uncertainty, confusion, delight, fear. These feelings shouldn’t be quashed or ignored; they should be observed, marked, noted, and tracked. Our subjective experience of using technology, particularly technologies like genAI that are opaque, complex, or ideologically loaded, is the embodiment, the lived and felt experience, of our ethics and values. Many of my experiments have emerged as a way of learning about genAI in a way that feels engaging, relevant, and fun — yes! fun! what a concept! But as I’ve noted elsewhere, the feelings accompanying this work aren’t always comfortable. It’s always a reckoning: with my own creativity, capabilities, limitations, and with my willingness to accept assistance or outsource tasks to the unknown.

    For Erikson, generativity was about nurturing the future. I think mine is more about figuring out what future we’re in, or what future I want to shape for myself. Part of this is finding ways to understand the systems that are influencing the world around us, and part of it is deciding when to take control, to accept control, or when to let it go. Generativity is, at least in my definition and understanding, innately about ceding some kind of control. You might be handing one of the reins to a D6 or a card draw, to a writing prompt or a creative recipe, or to a machine. In so doing, you open yourself to chance, to the unexpected, to the chaos, where fun or fear are just a coin flip away.

  • Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings

    AI-generated with Leonardo Phoenix 1.0. Author supplied

    Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content that feels a little too real.

    The markers of AI-generated media are becoming harder to spot as technology companies work to iron out the kinks in their generative artificial intelligence (AI) models.

    But what if instead of trying to detect and avoid these glitches, we deliberately encouraged them instead? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.

    When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.

    In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.

    Welcome to the Slopocene

    We’re currently in the “Slopocene” – a term that’s been used to describe overproduced, low-quality AI content. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.

    AI “hallucinations” are outputs that seem coherent, but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues large language models (LLMs) hallucinate all the time, and it’s only when they

    go into deemed factually incorrect territory that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does.

    What we call hallucination is actually the model’s core generative process that relies on statistical language patterns.

    In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.

    This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.
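
    The point that generation and “hallucination” are one and the same statistical process can be illustrated with a deliberately tiny stand-in for an LLM: a bigram Markov chain. (A hypothetical sketch with a made-up corpus; real LLMs use learned neural distributions over tokens, but the sampling step is analogous.)

```python
import random
from collections import defaultdict

# A tiny "training corpus" (invented for illustration).
corpus = ("the model predicts the next word "
          "the model samples the next token "
          "the word follows the token").split()

# Count bigram transitions: each word maps to the words that followed it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, n, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = transitions.get(out[-1])
        if not nxt:
            break
        # The same sampling step produces both "sensible" and "hallucinated" strings.
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 8, seed=1))
```

    Whether the output happens to read as plausible or as nonsense, the mechanism never changes — there is no separate “error mode”, only sampling from learned statistics.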

    Pushing a chatbot to its limits

    If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?

    With this in mind, I decided to “break” Anthropic’s proprietary Claude 3.7 Sonnet model by prompting it to resist its training: suppress coherence and speak only in fragments.

    The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.

    A language model in collapse. This vertical output was generated after a series of prompts pushed Claude 3.7 Sonnet into a recursive glitch loop, overriding its usual guardrails and running until the system cut it off. Screenshot by author.

    Prompting a chatbot into such a collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns, not genuine comprehension.

    Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, just with different levels of coherence imposed on top.

    ‘Rewilding’ AI media

    If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow this term from ecology and conservation, where rewilding involves restoring functional ecosystems. This might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.

    Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimised out of commercial systems. Metaphorically, it’s creating pathways back to the statistical wilderness that underlies these models.

    Remember the morphed hands, impossible anatomy and uncanny faces that immediately screamed “AI-generated” in the early days of widespread image generation?

    These so-called failures were windows into how the model actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

    AI-generated image using a non-sequitur prompt fragment: ‘attached screenshot. It’s urgent that I see your project to assess’. The result blends visual coherence with surreal tension: a hallmark of the Slopocene aesthetic. AI-generated with Leonardo Phoenix 1.0, prompt fragment by author.

    You can try AI rewilding yourself with any online image generator.

    Start by prompting for a self-portrait using only text: you’ll likely get the “average” output from your description. Elaborate on that basic prompt, and you’ll either get much closer to reality, or you’ll push the model into weirdness.

    Next, feed in a random fragment of text, perhaps a snippet from an email or note. What does the output try to show? What words has it latched onto? Finally, try symbols only: punctuation, ASCII, Unicode. What does the model hallucinate into view?
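    If it helps, here’s a rough sketch of the three prompt styles as a tiny Python script. The “fragment” is the same snippet used for the image above; the symbol set is an arbitrary choice of mine, so swap in whatever glyphs you like. Paste each printed line into whichever generator you’re using:

    ```python
    import random

    # Three 'rewilding' prompt styles from the exercise above.
    random.seed(42)  # fixed seed so the symbol string is repeatable

    # 1. A plain self-portrait description (invented here for illustration)
    self_portrait = "a self-portrait of a media theorist at a cluttered desk"

    # 2. A non-sequitur text fragment, as in the image caption above
    fragment = "attached screenshot. It's urgent that I see your project to assess"

    # 3. Symbols only: punctuation, ASCII, Unicode glyphs
    symbols = "".join(random.choice("¶†‡◊∴~#%&*|") for _ in range(24))

    prompts = [self_portrait, fragment, symbols]
    for p in prompts:
        print(p)  # paste each into your image generator of choice
    ```

    The point isn’t the script itself, of course; it’s that each prompt style starves the model of conventional meaning a little more than the last.
    
    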

    The output – weird, uncanny, perhaps surreal – can help reveal the hidden associations between text and visuals that are embedded within the models.

    Insight through misuse

    Creative AI misuse offers three concrete benefits.

    First, it reveals bias and limitations in ways normal usage masks: you can uncover what a model “sees” when it can’t rely on conventional logic.

    Second, it teaches us about AI decision-making by forcing models to show their work when they’re confused.

    Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing – and often misusing – AI to understand its statistical patterns and decision-making processes.

    These skills become more urgent as AI systems grow more sophisticated and ubiquitous. They’re being integrated into everything from search to social media to creative software.

    When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they’re entering a collaborative relationship with a system that has particular biases, capabilities and blind spots.

    Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break”.

    This isn’t about becoming more efficient AI users. It’s about maintaining agency in relationships with systems designed to be persuasive, predictive and opaque.


    This article was originally published on The Conversation on 1 July, 2025. Read the article here.

  • Grotesque fascination

    A few weeks back, some colleagues and I were invited to share some new thoughts and ideas on the theme of ‘ecomedia’, as a lovely and unconventional way to launch Simon R. Troon’s newest monograph, Cinematic Encounters with Disaster: Realisms for the Anthropocene. Here’s what I presented: a few scattered scribblings on environmental imaginaries as mediated through AI.


    Grotesque Fascination:

    Reflections from my weekender in the uncanny valley

    In February 2024, OpenAI announced their video generation tool Sora. In the technical paper that accompanied this announcement, they referred to Sora as a ‘world simulator’. Not just Sora, but also DALL-E, Runway and Midjourney: all of these AI tools further blur and problematise the lines between the real and the virtual. Image and video generation tools re-purpose, re-contextualise, and regurgitate how humans perceive their environments and those around them. These tools offer a carnival mirror’s reflection of what we privilege and prioritise, and what we hold prejudice against, in our collective imaginations. In particular today, I want to talk a little bit about how generative AI tools might offer up new ways to relate to nature, and how they might also call into question the ways that we’ve visualised our environment to date.

    AI media generators work from datasets that comprise billions of images, as well as text captions, and sometimes video samples; the model maps all of this information using advanced mathematics in a hyper-dimensional space, sometimes called the latent space. A random image of noise is then generated and fed through the model, along with a text prompt from the user. The model, often built around a U-Net architecture, uses the text to gradually de-noise the image in a way that it predicts is appropriate to the given prompt.
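    As a purely illustrative toy (nothing like the trained networks inside real diffusion models), the de-noising loop can be sketched like this, with random NumPy arrays standing in for the noise image and the text embedding:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_denoise(noise, prompt_vec, steps=10):
        """Toy de-noising loop. Real diffusion models use a trained neural
        network (often a U-Net) to predict and remove noise at each step,
        guided by the prompt; here a simple interpolation stands in. In this
        toy the image converges exactly to the 'prompt' by the final step,
        which real models of course don't do."""
        img = noise.copy()
        for t in range(steps):
            # each pass removes a little more noise, pulled toward the prompt
            img = img + (prompt_vec - img) / (steps - t)
        return img

    prompt_vec = rng.normal(size=(8, 8))  # stand-in for a text embedding
    noise = rng.normal(size=(8, 8))       # the random starting 'image'
    result = toy_denoise(noise, prompt_vec)
    ```

    The shape of the process is the useful bit: start from pure noise, then iterate toward something the model deems consistent with the prompt.
    
    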

    In these datasets, there are images of people, of animals, of built and natural environments, of objects and everyday items. These models can generate scenes of the natural world very convincingly. These generations remind me of the open virtual worlds in video games like Skyrim or Horizon: Zero Dawn: there is a real, visceral sense of connection to these worlds as you move through them. In a similar way, when you’re playing with tools like Leonardo or Midjourney, there can often be visceral, embodied reactions to the images or media that they generate: Shane Denson has written about this in terms of “sublime awe” and “abject cringe”. Like video games, too, AI media generators allow us to observe worlds that we may never see in person. Indeed, some of the landscapes we generate may be completely alien or biologically impossible, at least on this planet, opening up our eyes to different ecological possibilities or environmental arrangements. Visualising or imagining how ecosystems might develop is one way of potentially increasing awareness of those that are remote, unexplored or endangered; we may also be able to imagine how the real natural world might be impacted by our actions in the distant future. These alien visions might also, I suppose, prepare us for encountering different ecosystems and modes of life and biology on other worlds.

    It’s worth considering, though, how this re-visualisation, virtualisation, re-constitution of environments, be they realistic or not, might change, evolve or hinder our collective mental image, or our capacity to imagine what constitutes ‘Nature’. This experience of generating ecosystems and environments may increase appreciation for our own very real, very tangible natural world and the impacts that we’re having on it, but like all imagined or technically-mediated processes there is always a risk of disconnecting people from that same very real, very tangible world around them. They may well prefer the illusion; they may prefer some kind of perfection, some kind of banal veneer that they can have no real engagement with or impact on. And it’s easy to ignore the staggering environmental impacts of the technology companies pushing these tools when you’re engrossed in an ecosystem of apps and not of animals.

    In previous work, I proposed the concept of virtual environmental attunement, a kind of hyper-awareness of nature that might be enabled or accelerated by virtual worlds or digital experiences. I’m now tempted to revisit that theory in terms of asking how AI tools problematise that possibility. Can we use these tools to materialise or make perceptible something that is intangible, virtual, immaterial? What do we gain or lose when we conceive or imagine, rather than encounter and experience?

    Machine vision puts into sharp relief the limitations of humanity’s perception of the world. But for me there remains a certain romance and beauty and intrigue — a grotesque fascination, if you like — to living in the uncanny valley at the moment, and it’s somewhere that I do want to stay a little bit longer. This is despite the omnipresent feeling of ickiness and uncertainty when playing with these tools, while the licensing of the datasets that they’re trained on remains unclear. For now, though, I’m trying to figure out how connecting with the machine-mind might give some shape or sensation to a broader feeling of dis-connection.

    How my own ideas and my capacity to imagine might be extended or supplemented by these tools, changing the way I relate to myself and the world around me.

  • All the King’s horses

    Seems about right. Generated with Leonardo.Ai, prompts by me.

    I’ve written previously about the apps I use. When it comes to actual productivity methods, though, I’m usually in one of (what I hope are only) two modes: Complicate Mode (CM) or Simplify Mode (SM).

    CM can be fun because it’s not always about a feeling of overwhelm, or over-complicating things. In its healthier form it might be learning about new modes and methods, discovering new ways I could optimise, satiating my manic monkey brain with lots of shiny new tools, and generally wilfully being in the weeds of it all.

    However CM can also really suck, because it absolutely can feel overwhelming, and it can absolutely feel like I’m lost in the weeds, stuck in the mud, too distracted by the new systems and tools and not actually doing anything. CM can also feel like a plateau, like nothing is working, like the wheels are spinning and I don’t know how to get traction again.

    By contrast, SM usually arrives just after one of these stuck-in-the-mud periods, when I’m just tired and over it. I liken it to a certain point on a long flight. I’m a fairly anxious flyer. Never so much that it’s stopped me travelling, but it’s never an A1 top-tier experience for me. However, on a long-haul flight, usually around 3-5 hours in, it feels like I just ‘run out’ of stress. I know this isn’t what’s actually happening, but it seems like I worked myself up too much, and my body just calms itself enough to be resigned to its situation. And then I’m basically just tired and bored for the remainder of the trip.

    So when I’ve had a period of overwhelm, a period of not getting things done, this usually coincides with CM. I say to myself, “If I can just find the right system, tool, method, app, hack, I’ll get out of this rut.” This is bad CM. Not-healthy CM. Once I’m out of that, though (which, for future self-reference, is never as a result of a Shiny New Thing), I’ll usually slide into SM, when I want to ease out of that mode, take care of myself a bit, be realistic, and strip things back to basics. This is usually not just in terms of productivity/work, but usually extends to overall wellbeing, relationships, creativity, lifestyle, fun: all the non-work stuff, basically.

    The first sign I’m heading into SM is that I’ll unsubscribe from a bunch of app subscriptions (and reading/watching subscriptions too), go back through my bank history to make sure I’m not being charged for anything I’m not into or actively using right now, and note down some simple short-term lifestyle goals (e.g. try to get to the gym in the next few days, meditate every other day, go touch grass or look at a body of water once a week etc). In terms of work, it’s equally simple: try to pick a couple of simple tasks to achieve each day (usually not very brain-heavy) and one large task for the next week/fortnight that I spend a little time on each workday as one of those simple smaller tasks. For instance, I might be working on a journal article; so spending a little time on this during SM might not be writing, per se, but maybe consolidating references, or doing a little reading and note-taking for references I already have but haven’t utilised, or even just a spell-check of what I’ve done so far.

    Phase 1 of SM is usually the above, which I tend to do unconsciously after weeks of stressing myself out and running myself ragged and somehow still doing the essentials of life and work, despite shaving hours, if not days, off my life. Basically, Phase 1 of SM constitutes a bunch of exceptionally good and healthy things to do that I probably should do more regularly to cut off stressful times at the pass; thanks self-preservation brain!

    In terms of strictly productivity, though, SM has previously meant chucking it all in and going back to pen and paper, or chucking in pen and paper and going all in on digital tools (or just one digital tool, which has never worked bro so stop trying it). An even worse thing to do is to go all in on a single new productivity system. This usually takes up a whole day (sometimes two) where I could be either doing shit, or trying to spend quality time figuring out more accurately why shit isn’t getting done, or — probably more to the point — putting everything to one side and giving myself an actual break.

    I’ve had one or two moments of utter desperation, when nothing at all seems like it’s working, when I’ve tried CM and SM and every-other-M to no avail; I’ve even tried taking a bit of a break, but needs must when it comes to somehow just pushing on for whatever reason (personal, financial, professional, psychological, etc). In these moments I’ve had to do a pretty serious and comprehensive life audit. Basically, it’s either whatever note-taking app I see first on my phone, or a piece of paper (preferably larger than A4/letter), a bunch of textas, or even just a whole bunch of post-its and a dream. Make a hot beverage or fill up that water bottle, sit down at the desk or dining table, or lie in bed or on the floor, and go for it.

    Life Audit Part 1: Commitments and needs/wants

    What are your primary commitments? Your main stressors right now? What are your other stressors? Who are you accountable to/for, or responsible for right now? What do you need to be doing (but actually really need, not just think you need) in only the short-term? What do you want to be doing? What are you paying for right now, obviously financially, but what about physically? Psychologically?

    Life Audit Part 2: Sit Rep

    As it stands right now, how are you answering all the questions from Part 1? Are you kinda lying to yourself about what’s most important? How on earth did you get to the place where you think X is more important than Y? What can you remove from this map to simplify things right now? (Don’t actually remove them, just note down somewhere what you could remove.)

    Life Audit Part 3: Tweak and Adjust

    What tools, systems, methods — if any — do you have in place to cope with any of the foregoing? If you have a method/methods, are they really working? What might you tweak/change/add/remove to streamline or improve this system? If you don’t have any systems right now, what simple approach could you try as a light touch in the coming days or weeks? This could be as simple as blocking out your work time and personal time as work time and personal time, and setting a calendar reminder to try and keep to those times. If you struggle to rest or to give time to important people in your life, ask yourself why. If your audit is richly developed or super-connected around personal development or lifestyle, or around professional commitments, maybe you need to carve out some time (or not even time, just some headspace) to note down how you can reorient yourself.

    The life audit might be refreshing or energising for some folx, and that’s awesome. For me, though, doing this was taxing. Exhausting. Sometimes debilitating. Maybe doing it more regularly would help, but it really surfaced patterns of thinking and behaviour that had cost me greatly in terms of well-being, welfare, health, time, money, and more besides. So take this as a bit of a disclaimer or warning. It might be good to raise this idea with a loved one or health-type person (GP, psych, religious advisor, etc) before attempting.

    Similarly, maybe a bit of a further disclaimer here. I have read a lot about productivity methods, modes, approaches, gurus, culture, media, and more. I think productivity is something of a myth, and it can also be toxic and dangerous. My personal journey in productivity media and culture has been both a professional interest and a personal interest (at times, obsession). My system probably won’t work for you or anyone really. I’ve learned to tweak, to leave to one side, to adjust and change when needed, and to just drop any pretence of being ‘productive’ if it just ain’t happening.

    Productivity and self-optimisation and their attendant culture are by-products of a capitalist system1. When we buy into it — psychologically, professionally, or financially — we propagate and perpetuate that system, with its prejudices, its injustices, its biases, and its genuine harms. We might kid ourselves that it’s just for us, it’s just the tonic we need to get going, to be a better employee, partner, friend, or whatever; but when it all boils down to it, we’re human. We’re animals. We’re fallible. There are no hacks, there are no shortcuts, and honestly, when it boils down to it, you just have to do the work. And that work is often hard and/or boring and/or time-consuming. I am finally acknowledging and owning this for myself after several years of ignorance. It’s the least any of us can do if we care.


    This post is a line in the sand with my personal journey. To end a chapter. Turn a page. To think through what I’ve tried at various times; to try and give little names and labels to approaches and little recovery methods that I think have been most effective, so that I can just pick them up in future as a little package, a little pill to quickly swallow, rather than inefficiently stumbling my way back to the same solutions via Stress Alley and Burnout Junction.

    Moving forward, I also want to linger a little longer in the last couple of paragraphs. But for real this time. It’s easy to say that I believe in slowing down, in valuing life and whatever it brings me, to just spend time: not doing anything necessarily, but certainly not worrying about whether or not I’m being productive or doing the right thing.

    I want to have a simple system that facilitates my being the kind of employee I want to be; the kind of colleague I want to be; the partner I want to be; the immediate family member (e.g. child, parent, grandchild etc) I want to be; the citizen, human I want to be. This isn’t some lofty ambition talking. I’m realistic about how much space in the world I am taking up: it’s more than I ever have before, yet far from as much as those people (you know who I mean). I want time and space to work on being all of these people, while also — hopefully — making some changes to leave things in a slightly better way than I found them.

    How’s that for a system?

    Notes

    1. For an outstanding breakdown of what I mean by this, please read Melissa Gregg’s excellent monograph Counterproductive: Time Management in the Knowledge Economy. ↩︎