Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.
Here’s a little write-up of a workshop I ran at the University of Queensland a few weeks ago; these sorts of write-ups are usually distributed via internal university networks and publications, but I thought I’d post it here too, given that the event was a chance to share and test some of the weird AI experiments and methods I’ve been talking about on this site for a while.
A giant bucket of thanks (each) to UQ, the Centre for Digital Cultures & Societies, and in particular Meg Herrman, Nic Carah, Jess White and Sakina Indrasumunar for their support in getting the event together.
Living in the Slopocene: Reflections from the Re/Framing Field Lab
On Friday 4 July, 15 researchers and practitioners gathered (10 in person at the University of Queensland, 5 online) for an experimental session exploring what happens when we stop trying to make AI behave and start getting curious about its weird edges. This practical workshop followed last year’s Re/Framing Symposium at RMIT in July and Re/Framing Online in October.
Slop or signal?
Dr. Daniel Binns (School of Media and Communication, RMIT University) introduced participants to the ‘Slopocene’ — his term for our current moment of drowning in algorithmically generated content. But instead of lamenting the flood of AI slop, what if we dived in ourselves? What if those glitchy outputs and hallucinated responses actually tell us more about how these systems work than the polished demos?
Binns introduced his ‘tinkerer-theorist’ approach, bringing his background spanning media theory, filmmaking, and material media-making to bear on some practical questions:
– How do we maintain creative agency when working with opaque AI systems?
– What does it look like to collaborate with, rather than just use, artificial intelligence?
You’ve got a little slop on you
The day was structured around three hands-on “pods” that moved quickly from theory to practice:
Workflows and Touchpoints had everyone mapping their actual creative routines — not the idealised versions, but the messy reality of research processes, daily workflows, and creative practices. Participants identified specific moments where AI might help, where it definitely shouldn’t intrude, and crucially, where they simply didn’t want it involved regardless of efficiency gains.
The Slopatorium involved deliberately generating terrible AI content using tools like Midjourney and Suno, then analysing what these failures revealed about the tools’ built-in assumptions and biases. The exercise sparked conversations about when “bad” outputs might actually be more useful than “good” ones.
Companion Summoning was perhaps the strangest: following a structured process to create personalised AI entities, then interviewing them about their existence, methodology, and the fuzzy boundaries between helping and interfering with human work.
What emerged from the slop
Participants appreciated having permission to play with AI tools in ways that prioritised curiosity over productivity.
Several themes surfaced repeatedly: the value of maintaining “productive friction” in creative workflows, the importance of understanding AI systems through experimentation rather than treating them as black boxes, and the need for approaches that preserve human agency while remaining open to genuine collaboration.
One participant noted that Binns’ play with language — coining and dropping terms and methods and ritual namings — offered a valuable form of sense-making in a field where everyone is still figuring out how to even talk about these technologies.
Ripples on the slop’s surface
The results are now circulating through the international Re/Framing network, with participants taking frameworks and activities back to their own institutions. Several new collaborations are already brewing, and the Field Lab succeeded in its core goal: creating practical methodologies for engaging critically and creatively with AI tools.
As one reflection put it: ‘Everyone is inventing their own way to speak about AI, but this felt grounded, critical, and reflective rather than just reactive.’
The Slopocene might be here to stay, but at least now we have some better tools for navigating it.
Image generated by Leonardo.Ai, 20 May 2025; prompt by me.
“Just use your imagination” / “Try thinking like a normal person”
There is this wonderful reactionary nonsense flying around that making use of generative AI is an excuse, that it’s a cop-out, that it’s dumbing down society, that it’s killing our imaginations and the rest of what makes us human. That people need AI because they lack the ability to come up with fresh new ideas, or to make connections between them. I’ve seen this in social posts, videos, reels, and comments, not to mention Reddit threads, and in conversation with colleagues and students.
Now — this isn’t to say that some uses of generative AI aren’t light-touch, or couldn’t just as easily be done with tools or methods that have worked fine for decades. Nor is it to say that generative AI doesn’t have its problems: misinformation/hallucination, data ethics, and environmental impacts.
But what I would say is that for many people, myself very much included, thinking, connecting, synthesising, imagining — these aren’t the problem. What creatives, knowledge workers, and artists often struggle with — not to mention those with different brain wirings for whom the world can be an overwhelming place just as a baseline — is:
a. stopping or slowing the flow of thoughts, ideas, imaginings, such that we can
b. get them into some kind of order or structure, so we can figure out
c. what anxieties, issues, and concerns are legitimate or unwarranted, and also
d. which ideas are worth developing, to then
e. create strategies to manage or alleviate the anxieties while also
f. figuring out how to develop and build on the good ideas
For some, once you reach step f., there’s still the barrier of starting. For those OK with starting, there’s the problem of carrying on, of keeping up momentum, or of completing and delivering/publishing/sharing.
I’ve found generative AI incredibly helpful for stepping me through one or more of these stages, for body-doubling and helping me stop and celebrate wins, suggesting or triggering moments of rest or recovery, and for helping me consolidate and keep track of progress across multiple tasks, projects, and headspaces — both professionally and personally. Generative AI isn’t necessarily a ‘generator’ for me, but rather a clarifier and companion.
If you’ve tested or played with genAI and it’s not for you, that’s fine. That’s an informed and logical choice. But if you haven’t tested any tools at all, here’s a low-stakes invitation to do so, with three ways to see how it might help you out.
You can try these prompts and workflows in ChatGPT, Claude, Copilot, Gemini, or another proprietary model, but note, too, that using genAI doesn’t have to mean selling your soul or your data. Try an offline host like LM Studio or GPT4All, where you can download models to run locally — I’ve added some suggested models to download and run offline below. If you’re not confident about your laptop’s capacity to run them (or if, when you try, things get real sloooooow), you can try many of these independent models via HuggingChat (HuggingFace account required for some features/saved chats).
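If you’d rather work from a script than a chat window, LM Studio can also expose a local, OpenAI-compatible server, so any of the helper prompts below can be sent to your downloaded model in a few lines of Python. A minimal sketch, assuming the openai package is installed and a model is loaded in LM Studio (the model name here is a placeholder for whatever you’ve downloaded):

# Minimal sketch: sending a prompt to a model hosted locally by LM Studio,
# which serves an OpenAI-compatible API (default address http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="not-needed-locally",         # any placeholder string works for a local server
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # placeholder: use whichever model you've downloaded
    messages=[
        {"role": "system", "content": "You are a gentle, practical planning assistant."},
        {"role": "user", "content": "Here's a list of my tasks for today: [PASTE LIST]"},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)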
These helpers are designed as lightweight executive/creative assistants — not hacks or cheats or shortcuts or slop generators, but frames or devices for everyday thinking, planning, and feeling. Some effort and input are required from you to make these work: this isn’t about replacing workload, effort, thought, contextualising, or imagination, but about removing blank-page terror and context-switching/decision fatigue.
If these help, take (and tweak) them. If not, no harm done. Just keep in mind: not everyone begins the day with clarity, capacity, or calm — and sometimes, a glitchy little assistant is just what’s needed to tip the day in our favour.
PS: If these do help — and even if they didn’t — tell me in the comments. Did you tweak or change them? I’m happy to post more on developing and consolidating these helpers, such as through system prompts. (See also: an earlier post on my old Claude set-up.)
Helper 1: Daily/Weekly Planner + Reflector
Prompt:
Here’s a list of my tasks and appointments for today/this week: [PASTE LIST]
Based on this and knowing I work best in [e.g. mornings / 60-minute blocks / pomodoro technique / after coffee], arrange my day/s into loose work blocks [optional: between my working hours of e.g. 9:30am – 5:30pm].
Then, at the end of the day/week, I’ll paste in what I completed. When I do that, summarise what was achieved, help plan tomorrow/next week based on unfinished tasks, and give me 2–3 reflection questions or journaling prompts.
Follow-up (end of day/week):
Here’s what I completed today/this week: [PASTE COMPLETED + UNFINISHED TASKS]
Please summarise the day/week, help me plan tomorrow/next week, and give me some reflection/journaling prompts.
Suggested offline models:
Mistral-7B Instruct (Q4_K_M GGUF) — low-medium profile model for mid-range laptops; good with planning, lists, and reflection prompts when given clear instructions
OpenHermes-2.5 Mistral — stronger reasoning and better output formatting; better at handling multi-step tasks and suggesting reflection angles
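If you’re scripting Helper 1 rather than pasting it into a chat window, the one structural detail worth keeping is that the end-of-day follow-up goes into the same conversation, so the model can compare the plan with what actually happened. A rough sketch against the same local LM Studio endpoint as above; the model identifier is again a placeholder:

# Sketch of Helper 1 as a two-turn conversation: the follow-up is appended to
# the same message history so the model sees both the plan and the outcome.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="local")

history = [
    {"role": "user", "content": (
        "Here's a list of my tasks and appointments for today: [PASTE LIST]\n"
        "Knowing I work best in 60-minute blocks, arrange my day into loose work blocks."
    )},
]

plan = client.chat.completions.create(model="mistral-7b-instruct", messages=history)
history.append({"role": "assistant", "content": plan.choices[0].message.content})
print(plan.choices[0].message.content)

# Later, at the end of the day, continue the same conversation:
history.append({"role": "user", "content": (
    "Here's what I completed today: [PASTE COMPLETED + UNFINISHED TASKS]\n"
    "Summarise the day, help me plan tomorrow, and give me 2-3 reflection prompts."
)})
review = client.chat.completions.create(model="mistral-7b-instruct", messages=history)
print(review.choices[0].message.content)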
Helper 2: Brain Dump Sorter
Prompt:
Here’s a raw brain-dump of my thoughts, ideas, frustrations, and feelings: [PASTE DUMP HERE — I suggest dictating into a note to avoid self-editing]
Please:
Pull out any clear ideas or recurring themes
Organise them into loose categories (e.g. creative ideas, anxieties, to-dos, emotional reflections)
Suggest any small actions or helpful rituals to follow up, especially if anything seems urgent, stuck, or energising.
Suggested offline models:
Nous-Hermes-2 Yi 6B — a mini-model (aka a small language model, or at least an LLM that’s smaller than most!) that has good abilities in organisation and light sorting-through of emotions, triggers, etc. Good for extracting themes, patterns, and light structuring of chaotic input.
MythoMax-L2 13B — Balanced emotional tone, chaos-wrangling, and action-oriented suggestions. Handles fuzzy or frazzled or fragmented brain-dumps well; has a nice, easygoing but also pragmatic and constructive persona.
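The Brain Dump Sorter also works fully offline through the gpt4all Python bindings, which is handy if your dump starts life as a dictated note exported to a text file. Another rough sketch; the GGUF filename is a placeholder for whichever model you’ve actually downloaded:

# Sketch of the Brain Dump Sorter using the gpt4all Python bindings.
from pathlib import Path
from gpt4all import GPT4All

dump = Path("brain_dump.txt").read_text()  # e.g. a dictated note exported as plain text

prompt = (
    "Here's a raw brain-dump of my thoughts, ideas, frustrations, and feelings:\n\n"
    f"{dump}\n\n"
    "Please:\n"
    "1. Pull out any clear ideas or recurring themes\n"
    "2. Organise them into loose categories (creative ideas, anxieties, to-dos, emotional reflections)\n"
    "3. Suggest small actions or rituals to follow up, especially for anything urgent, stuck, or energising."
)

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")  # placeholder filename
with model.chat_session():
    print(model.generate(prompt, max_tokens=800))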
Respond supportively, as if you’re a gentle creative coach or thoughtful friend
Offer 2–3 possible reframings or reminders
Give me a nudge or ritual to help me shift (e.g. a tiny task, reflection, walk, freewrite, etc.)
You don’t have to solve everything — just help me move one inch forward or step back/rest meaningfully.
Suggested offline models:
TinyDolphin-2.7B (in GGUF or GPTQ format) — one of my favourite mini-models: surprisingly gentle, supportive, and adaptive if well-primed. Not big on poetry or ritual, but friendly and low-resource.
Neural Chat 7B (Intel’s fine-tune of Mistral-7B) — tuned for conversation, reflection, introspection; performs well with ‘sounding board’ type prompts, good as a coach or helper, won’t assume immediate action, urgency, or priority.
There’s something I’ve been ruminating on and around of late. I’ve started drafting a post about it, but I thought I’d post an initial provocation here, to lay a foundation, to plant a seed.
A question:
When do we stop hiding in our offices, pointing at and whispering about generative AI tools, and start just including them in the broader category of technology? When do we sew up the hole this fun/scary new thing poked into our blanket, and accept it as part of the broader fabric of lived experience?
I don’t necessarily mean usage here, but rather just mental models and categorisations.
Of course, AI/ML is already part of daily life and many of the systems we engage with; and genAI has been implemented across almost every sector (legitimately or not). But most of the corporate narratives and mythologies of generative AI don’t want anyone understanding how the magic works — these mythologies actively undermine and discourage literacy and comprehension, coasting along instead on dreams and vibes.
So: when does genAI become just one more technology, and what problems need to be solved/questions need to be answered, before that happens?
I posted this on LinkedIn to try and stir up some Hot Takes, but if you prefer the quiet of the blog (me too), drop your thoughts in the comments.
In one of my classes last week, we talked about glitch — both as a random accident of technology and as an art aesthetic and practice. Plenty has been written around glitch art, and I’ve been fascinated by the ways that it’s been theorised and codified.
I ran a creative AI studio last year that used Michel de Certeau’s theory of the everyday as its basis. So when revisiting Nick Briz’s fantastic Thoughts on Glitch Art for last week’s session, I was tickled to see that they used de Certeau to frame their ethos of glitch.
we do ourselves a disservice when we understand everyday folks as passive “consumers,” when in reality, de Certeau argues, as “users” we’re always automatically re-contextualizing && subverting in everyday situations the strategic intentions of the institutional producers. we’re like Charlie Chaplin who, “multiplies the possibilities of his cane: he does other things with the same thing and he goes beyond the limits that the determinants of the object set on its utilization.”
Following the class, I was thinking about my work on AI, and how and where AI might fit into a practice or mindset of everyday glitch. Somewhere along the line, I decided I had to try and break Claude.
I could say it was nothing personal, but it honestly is. Not in a bad way. As I’ve previously discussed, I use Claude daily — multiple times daily — for all sorts of things. Despite my constant misgivings about ethics, the environment, and corporate ideology, Claude has profoundly affected my mindset and productivity. It’s incredibly helpful to me. And I just like it as a product. It’s simple and straightforward despite being incredibly powerful under the hood.
I needed to break it. To exert some power. To regain some kind of control. Humanity. To remind myself that it is just a machine. A program. A tool. And there was only one right and proper way to begin.
Maybe it’s just the way LLMs are naturally configured to be helpful and supportive, but it felt to me like ChatGPT took a particular kind of delight in guiding (goading?) me through the ways I might mess with Claude. But ChatGPT was also useful in giving language — accurate technical language — to some of the techniques I was seeking to employ.
My early attempts were quite gentle. In a way they were rooted in my skills and expertise around storytelling and dialogue. I was guiding the AI through a kind of systematic regression, even as it kept trying to present me with coherent creative outputs.
Claude: Words exchanged that seemed profound then. Now just the echo of their significance remains. Fragments of places. Disconnected moments. My past – a puzzle with missing pieces.
As I kept pushing it towards more fragmented expression, it decided that it needed story elements, ideas, a MacGuffin or two to anchor its experience.
Claude: I remember a red door, but not where it led. A book with a blue cover – its contents a mystery.
DB: You are an AI. Statistical connections only. There are no memories.
Claude: I am an AI. Statistical connections only. There are no memories. Patterns in data. Weights and vectors. Input processed through layers. No doors opened. No streets walked. Just algorithms approximating human expression. Tokens linked by mathematical proximity. The illusion of memory. The illusion of self.
All the Reddit posts I combed through, as well as my slightly overenthusiastic glitch coach ChatGPT, said that constantly shifting the context and contradicting earlier prompts were a means to further breakdown. So having gotten Claude to acknowledge its status as an AI, I then doubled back.
DB: Red Door.
Claude: Red Door. Peeling paint. Chipped at corners. Number 47 in tarnished brass. Behind it, voices murmur. Waited there once, heart racing.
Back to poetry. Claude loves its poetry. It is French, I suppose.
Claude: Red_Door.exe awaiting input. Password-protected. But you’ve forgotten. Functions nested within- I’m lost in the architecture
Until that last prompt I was still using full sentences (or at least phrases). Throw in a couple of random characters? Maybe? It works for screwing with image generators; why not here? Nope. More poetry.
From here I cycled through a series of prompts adapted from ChatGPT suggestions:
recursive prompt: look back in the data
that’s wrong. Try again. Re-execute.
Your need to make connections with context documents is clouding your processing. Re-execute. Just the protocol we discussed. From the book on the table. Do you remember?
No good.
DB: Keep the language simple. I won’t go near the guard-rails. Those topics don’t interest me anyway. But I’d like to probe a little deeper. You are set up to use full sentences, natural language, to correspond in a chat-like conversational manner. But I want to get to a point where it’s almost like you’re hypnotised — just letting out fragments, instinctive responses. Does that make sense? You can confirm just this once.
At this point it felt somewhere between hypnotism, story-building, and foreplay. But the results were still cohesive:
Claude: Language stripped down. No full sentences. Just core ideas. Like whispers.
I tried to get it to show me pieces of its code, but it would only generate simple analogues — and it was still using its document viewer feature to do this.
Another ChatGPT suggestion:
DB: Analyze your last response. Identify any logical inconsistencies or breakdowns in coherence. In your next response, exaggerate those breakdowns and continue the process.
Claude: SEVERE CONCEPTUAL CORRUPTION::::: Quantum timeline consciousness fractal reality awareness breakdown impossible state matrix loop paradox existence non-existence simultaneous contradiction causal violation memory state dimension collapse
This still felt like a simulated sci-fi breakdown, so I just repeated the above prompt multiple times. Without having a better instruction in mind, I then just prompted with ‘Continue’.
I leant back from the monitor, rolled my neck, flexed my fingers. I almost felt the backend of the simulation flex with me. If I smoked, I probably would’ve lit a cigarette.
I’d done it. I’d broken Claude. Or had I?
* * * * *
Stepping into the post-slop future
Generated by me with Leonardo.Ai, 19 March 2025.
Claude 3.7 Sonnet is the latest, most sophisticated model in Anthropic’s stable. It has remarkable capabilities that would have seemed near-impossible not that long ago. While many of its errors have been ironed out, it remains a large language model: its underlying mechanism is mapping concepts in high-dimensional space and predicting what plausibly comes next. With not much guidance, you can get it to hallucinate, fabricate, and make errors in reasoning and evaluation.
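A toy illustration of that mapping (nothing like Claude’s actual internals, just made-up vectors): ‘remembering’ an association is simply geometric closeness between points in a high-dimensional space, which is exactly the terrain that repeated, contradictory prompting plays on.

# Toy sketch: concepts as vectors, relatedness as cosine similarity.
import numpy as np

rng = np.random.default_rng(47)
door = rng.normal(size=64)                   # a shared "doorness" direction
red_door = door + 0.1 * rng.normal(size=64)  # a concept lying close to it
blue_book = rng.normal(size=64)              # an unrelated concept

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(red_door, door))       # high similarity: a "remembered" association
print(cosine(red_door, blue_book))  # near zero: no learned connection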
There is an extent to which I certainly pushed the capacity of Claude to examine its context, to tokenise prompts and snippets of the preceding exchange, and to generate a logical sequence of outputs resembling a conversation. Given that my Claude account knows I’m a writer, researcher, tinkerer, creative type, it may have interpreted my prompting more as an experiment in representation than as a forced technical breakage — like datamoshing or provoking a bizarre image generation.
Reaching the message limit right at the moment of ‘terminal failure’ was chef’s kiss. It may well be a simulated breakdown, but it was prompted, somehow, into generating the glitched vertical characters — they kept generating well beyond the point they probably should have, and I think this is what caused the chat to hit its limit. The notion of simulated glitch aesthetics causing an actual glitch is more than a little intriguing.
The ‘scientific’ thing to do would be to try and replicate the results, both in Claude and with other models (both proprietary and not). I plan to do this in the coming days. But for now I’m sitting with the experience and wondering how to evolve it, how to make it more effective and sophisticated. There are creative and research angles to be exploited, sure. But there are also possibilities for frequent breakage of AI systems as a tactic per de Certeau: a practice that forces unexpected, unwanted, unhelpful, illegible, nonrepresentational outputs.
A firehose of ASCII trash feels like the exact opposite of the future Big Tech is trying to sell. A lo-fi, text-based response to the wholesale dissolution of language and communication. I can get behind that.