The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: ways of working

  • Why can’t you just THINK?!

    Image generated by Leonardo.Ai, 20 May 2025; prompt by me.

    “Just use your imagination” / “Try thinking like a normal person”

    There is this wonderful reactionary nonsense flying around that making use of generative AI is an excuse, that it’s a cop-out, that it’s dumbing down society, that it’s killing our imaginations and the rest of what makes us human. That people need AI because they lack the ability to come up with fresh new ideas, or to make connections between them. I’ve seen this in social posts, videos, reels, and comments, not to mention Reddit threads, and in conversation with colleagues and students.

    Now — this isn’t to say that some uses of generative AI aren’t light-touch, or couldn’t just as easily be done with tools or methods that have worked fine for decades. Nor is it to say that generative AI doesn’t have its problems: misinformation/hallucination, data ethics, and environmental impacts.

    But what I would say is that for many people, myself very much included, thinking, connecting, synthesising, imagining — these aren’t the problem. What creatives, knowledge workers, and artists often struggle with — not to mention those with different brain wirings for whom the world can be an overwhelming place just as a baseline — is:

    1. stopping or slowing the number of thoughts, ideas, imaginings, such that we can
    2. get them into some kind of order or structure, so we can figure out
    3. what anxieties, issues, and concerns are legitimate or unwarranted, and also
    4. which ideas are worth developing, to then
    5. create strategies to manage or alleviate the anxieties while also
    6. figuring out how to develop and build on the good ideas

    For some, once you reach step 6, there’s still the barrier of starting. For those OK with starting, there’s the problem of carrying on, of keeping up momentum, or of completing and delivering/publishing/sharing.

    I’ve found generative AI incredibly helpful for stepping me through one or more of these stages, for body-doubling and helping me stop and celebrate wins, suggesting or triggering moments of rest or recovery, and for helping me consolidate and keep track of progress across multiple tasks, projects, and headspaces — both professionally and personally. Generative AI isn’t necessarily a ‘generator’ for me, but rather a clarifier and companion.

    If you’ve tested or played with genAI and it’s not for you, that’s fine. That’s an informed and logical choice. But if you haven’t tested any tools at all, here’s a low-stakes invitation to do so, with three ways to see how it might help you out.

    You can try these prompts and workflows in ChatGPT, Claude, Copilot, Gemini, or another proprietary model, but note, too, that using genAI doesn’t have to mean selling your soul or your data. Try an offline host like LM Studio or GPT4All, where you can download models to run locally — I’ve added some suggested models to download and run offline under each helper. If you’re not confident about your laptop’s capacity to run them (or if, when you try, things get real sloooooow), you can try many of these independent models via HuggingChat (HuggingFace account required for some features/saved chats).
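
    If you’re comfortable with a little Python, GPT4All also has a Python package alongside the desktop app, so the same “download once, run offline” idea works from a script. Here’s a minimal sketch; the model filename below is only a placeholder, so swap in whichever GGUF file you’ve actually downloaded.

    ```python
    # pip install gpt4all
    from gpt4all import GPT4All

    # Placeholder filename: use whichever GGUF model you've downloaded
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

    # Everything below runs locally: no account, no data leaving your machine
    reply = model.generate(
        "Here's a list of my tasks and appointments for today: ...",
        max_tokens=400,
    )
    print(reply)
    ```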

    These helpers are designed as lightweight executive/creative assistants — not hacks or cheats or shortcuts or slop generators, but rather frames or devices for everyday thinking, planning, feeling. Some effort and input are required from you to make these work: this isn’t about replacing workload, effort, thought, contextualising or imagination, but rather about removing blank-page terror, or context-switching/decision fatigue.

    If these help, take (and tweak) them. If not, no harm done. Just keep in mind: not everyone begins the day with clarity, capacity, or calm — and sometimes, a glitchy little assistant is just what’s needed to tip the day in our favour.


    PS: If these do help — and even if they didn’t — tell me in the comments. Did you tweak or change them? Happy to post more on developing and consolidating these helpers, such as through system prompts. (See also: an earlier post on my old Claude set-up.)



    Helper 1: Daily/Weekly Planner + Reflector

    Prompt:

    Here’s a list of my tasks and appointments for today/this week:
    [PASTE LIST]

    Based on this and knowing I work best in [e.g. mornings / 60-minute blocks / pomodoro technique / after coffee], arrange my day/s into loose work blocks [optional: between my working hours of e.g. 9:30am – 5:30pm].

    Then, at the end of the day/week, I’ll paste in what I completed. When I do that, summarise what was achieved, help plan tomorrow/next week based on unfinished tasks, and give me 2–3 reflection questions or journalling prompts.

    Follow-up (end of day/week):

    Here’s what I completed today/this week:
    [PASTE COMPLETED + UNFINISHED TASKS]

    Please summarise the day/week, help me plan tomorrow/next week, and give me some reflection/journalling prompts.

    Suggested offline models:

    • Mistral-7B Instruct (Q4_K_M GGUF) — low-medium profile model for mid-range laptops; good with planning, lists, and reflection prompts when given clear instructions
    • OpenHermes-2.5 Mistral — stronger reasoning and better output formatting; better at handling multi-step tasks and suggesting reflection angles



    Helper 2: Brain Dump Sorter

    Prompt:

    Here’s a raw brain-dump of my thoughts, ideas, frustrations, and feelings:
    [PASTE DUMP HERE — I suggest dictating into a note to avoid self-editing]

    Please:

    1. Pull out any clear ideas or recurring themes
    2. Organise them into loose categories (e.g. creative ideas, anxieties, to-dos, emotional reflections)
    3. Suggest any small actions or helpful rituals to follow up, especially if anything seems urgent, stuck, or energising.

    Suggested offline models:

    • Nous-Hermes-2 Yi 6B — a mini-model (aka small language model, or at least an LLM that’s smaller than most!) with good abilities in organisation and light sorting-through of emotions, triggers, etc. Good for extracting themes, patterns, and light structuring of chaotic input.
    • MythoMax-L2 13B — Balanced emotional tone, chaos-wrangling, and action-oriented suggestions. Handles fuzzy or frazzled or fragmented brain-dumps well; has a nice, easygoing but also pragmatic and constructive persona.



    Helper 3: Creative Block / Paralysis

    Prompt:

    I’m feeling blocked/stuck. Here’s what’s going on:
    [PASTE THOUGHTS — again, dictation recommended]

    Please:

    • Respond supportively, as if you’re a gentle creative coach or thoughtful friend
    • Offer 2–3 possible reframings or reminders
    • Give me a nudge or ritual to help me shift (e.g. a tiny task, reflection, walk, freewrite, etc.)

    You don’t have to solve everything — just help me move one inch forward or step back/rest meaningfully.

    Suggested offline models:

    • TinyDolphin-2.7B (in GGUF or GPTQ form) — one of my favourite mini-models: surprisingly gentle, supportive, and adaptive if well-primed. Not big on poetry or ritual, but friendly and low-resource.
    • Neural Chat 7B (Intel’s fine-tune of Mistral-7B) — tuned for conversation, reflection, and introspection; performs well with ‘sounding board’ type prompts, good as a coach or helper, and won’t assume immediate action, urgency, or priority
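
    If you’d rather script these helpers than paste them into a chat window, the same pattern works with the gpt4all Python package mentioned earlier. Here’s a rough sketch of Helper 1 as a two-turn exchange; the model filename and the bracketed text are placeholders, and the chat session simply keeps the morning plan in context for the end-of-day check-in.

    ```python
    # pip install gpt4all
    from gpt4all import GPT4All

    # Placeholder filename: swap in whichever GGUF model you have on disk
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

    planner_prompt = (
        "Here's a list of my tasks and appointments for today: [PASTE LIST]. "
        "Knowing I work best in 60-minute blocks, arrange my day into loose "
        "work blocks between my working hours of 9:30am and 5:30pm."
    )

    end_of_day = (
        "Here's what I completed today: [PASTE COMPLETED + UNFINISHED TASKS]. "
        "Please summarise the day, help me plan tomorrow, and give me 2-3 "
        "reflection or journalling prompts."
    )

    # A chat session keeps the morning plan in context for the evening check-in
    with model.chat_session():
        print(model.generate(planner_prompt, max_tokens=500))
        print(model.generate(end_of_day, max_tokens=500))
    ```
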
  • Give me your answer, do

    By Ravi Kant on Pexels, 13 Mar 2018.

    For better or worse, I’m getting a bit of a reputation as ‘the AI guy’ in my immediate institutional sub-area. Depending on how charitable you’re feeling, this could be seen as very generous or wildly unfounded. I am not in any way a computer scientist or expert on emergent consciousness, synthetic cognition, language models, media generators, or even prompt engineering. I remain the same old film and media teacher and researcher I’ve always been. But I have always used fairly advanced technology as part of anything creative. My earliest memories are of typing up, decorating, and printing off books or banners or posters from my Dad’s old IBM computer. From there it was using PC laptops and desktops, and programs like Publisher or WordPerfect, 3D Movie Maker and Fine Artist, and then more media-specific tools at uni, like Final Cut and Pro Tools.

    Working constantly with computers, software, and apps automatically turns you into something of a problem-solver—the hilarious ‘joke’ of media education is that the teachers have to be only slightly quicker than their students at Googling a solution. As well as problem-solving, I am predisposed to ‘daisy-chaining’. My introduction to the term was as a means of connecting multiple devices together—on Mac systems circa 2007–2017 this was fairly standard practice thanks to the inter-connectivity of FireWire cables and ports (though I’m informed that this is still common even over USB). Reflecting back on years of software and tool usage, though, I can see how I was daisy-chaining constantly: ripping from CD or DVD, or capturing from tape, then converting to a usable format in one program, then importing the export to another program, editing or adjusting, exporting once again, then burning or converting, et cetera, et cetera. Even not that long ago, there weren’t exactly ‘one-stop’ solutions for media-making in the way you might see an app like CapCut or Instagram now.

    There’s also a kind of ethos to daisy-chaining. In shifting from one app, program, platform, or system to another, you’re learning different ways of doing things, adapting your workflows each time, even if only subtly. Each interface presents you with new or different options, so you can apply a unique combination of visual, aural, and affective layers to your work. There’s also an ethos of independence: you are not locked in to one app’s way of doing things. You are adaptable, changeable, and you cherry-pick the best of what a variety of tools has to offer in order to make your work the best it can be. This is the platform economics argument, or the politics of platform economics argument, or some variant on all of this. Like everyone, I’ve spent many hours whinging about the time it took to make stuff or to get stuff done, wishing there was the ‘perfect app’ that would just do it all. But over time I’ve come to love my bundle of tools—the set I download/install first whenever I get a new machine (or have to wipe an old one); my (vomits) ‘stack’.

    * * * * *

    The above philosophy is what I’ve found myself doing with AI tools. I suppose out of all of them, I use Claude the most. I’ve found it the most straightforward in terms of setting up custom workspaces (what Claude calls ‘Projects’ and what ChatGPT calls ‘Custom GPTs’), and just generally really like the character and flavour of responses I get back. I like that it’s a little wordy, a little more academic, a little more florid, because that’s how I write and speak; though I suppose the outputs are not just encoded into the model, but also a mirror of how I’ve engaged with it. Right now in Claude I have a handful of projects set up:

    • Executive Assistant: Helps me manage my time, prioritise tasks, and keep me on track with work and creative projects. I’ve given it summaries of all my projects and commitments, so it can offer informed suggestions where necessary.
    • Research Assistant: I’ve given this most of my research outputs, as well as a curated selection of research notes, ideas, reference summaries, sometimes whole source texts. This project is where I’ll brainstorm research or teaching ideas, fleshing out concepts, building courses, etc.
    • Creative Partner: This remains semi-experimental, because I actually don’t find AI that useful in this particular instance. However, this project has been trained on a couple of my produced media works, as well as a handful of creative ideas. I find the responses far too long to be useful, and often very tangential to what I’m actually trying to get out of it—but this is as much a project context and prompting problem as it is anything else.
    • 2 x Course Assistants: Two projects have been trained with all the materials related to the courses I’m running in the upcoming semester. These projects are used to brainstorm course structures, lesson plans, and even lecture outlines.
    • Systems Assistant: This is a little different to the Executive/Research Assistants, in that it is specifically set up around ‘systems’, so the various tools, methods, workflows that I use for any given task. It’s also a kind of ‘life admin’ helper in the sense of managing information, documents, knowledge, and so on. Now that I think of it, ‘Daisy’ would probably be a great name for this project—but then again…

    I will often bounce ideas, prompts, notes between all of these different projects. How much this process corrupts the ‘purity’ of each individual project is not particularly clear to me, though I figure if it’s done in an individual chat instance it’s probably not that much of an issue. If I want to make something part of a given project’s ongoing working ‘knowledge’, I’ll put a summary somewhere in its context documents.

    But Claude is just one of the AI tools I use. I also have a bunch of language models on a hard drive that is always connected to my computer; I use these through the app GPT4All. These have similar functionality to Claude, ChatGPT, or any other proprietary/corporate LLM chatbot. Apart from the upper limit on their context windows, they have no usage limits; they run offline, privately, and at no cost. Their efficacy, though, is mixed. Llama and its variants are usually pretty reliable—though these are Meta-built models, so there’s an accompanying ‘ick’ whenever I use them. Falcon, Hermes, and OpenOrca are independently developed, though they have taken quite some getting used to—I’ve also found that cloning them and training them on specific documents and with unique context prompts is the best way to use them.

    With all of these tools, I frequently jump between them, testing the same prompt across multiple models, or asking one model to generate prompts for another. This is a system of usage that may seem confusing at first glance, but is actually quite fluid. The outputs I get are interesting, diverse, and useful, rather than all being of the same ‘flavour’. Getting three different summaries of the same article, for example, lets me see what different models privilege in their ‘reading’—and then I’ll know which tool to use to target that aspect next time. Using AI in this way is still time-intensive, but I’ve found it much less laborious than repeatedly hammering at a prompt in a single tool trying to get the right thing. It’s also much more enjoyable, and feels more ‘human’, in the sense that you’re bouncing around between different helpers, all of whom have different strengths. The fail-rate is thus significantly lowered.
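
    For what it’s worth, that side-by-side testing is easy enough to script on the GPT4All side of things. Here’s a rough sketch using its Python package; the model filenames are just stand-ins for whatever is actually sitting on the drive, and the prompt is a placeholder.

    ```python
    # pip install gpt4all
    from gpt4all import GPT4All

    # Placeholder filenames: whichever GGUF models are on the drive
    MODEL_FILES = [
        "mistral-7b-instruct-v0.1.Q4_0.gguf",
        "nous-hermes-llama2-13b.Q4_0.gguf",
        "gpt4all-falcon-q4_0.gguf",
    ]

    prompt = "Summarise the following article in five bullet points:\n[PASTE ARTICLE]"

    # Run the same prompt through each model and label the output,
    # so the different 'readings' can be compared side by side
    for filename in MODEL_FILES:
        model = GPT4All(filename)
        print(f"\n===== {filename} =====")
        print(model.generate(prompt, max_tokens=300))
    ```

    Loading several 7B–13B models back to back is slow, so this is a sketch for occasional comparison rather than an everyday pipeline.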

    Returning to ethos, using AI in this way feels more authentic. You learn more quickly how each tool functions, and what each is best at. Jumping to different tools feels less like a context switch—as it might between software—and more like asking a different co-worker to weigh in. As someone who processes things through dialogue—be that with myself, with a journal, or with a friend or family member—this is a surprisingly natural way of working, of learning, and of creating. I may not be ‘the AI guy’ from a technical or qualifications standpoint, but I feel like I’m starting to earn the moniker at least from a practical, runs-on-the-board perspective.

  • Inertia

    Photo by Alexander Zvir, via Pexels.

    Since the interminable Melbourne lockdowns and their horrific effect on the population of the city, my place of work has implemented ‘slow-down’ periods. These are typically timed around the usual holiday periods, e.g. Christmas and Easter, but there’s often also a slowdown scheduled around mid-semester and mid-year breaks. The idea isn’t exactly to stop work (in this economy? ahahahaha no, peasant.) but rather to skip or postpone any non-essential meetings and spend time on focused work. Most often for teacher-researchers like myself, this means catching up on marking assignments or prepping for the coming weeks of classes, though sometimes we can scrape up some time to think about long-gestating research projects or creative work. That’s the theory, anyway.

    I will say it’s nice to pause meetings for a week or two. The nature of academic work is (and should be) collaborative, dependent on bouncing ideas off others, working together to solve gnarly pedagogical issues, pooling resources to compile rich and nuanced critical work. But if you’re balancing teaching or coordination along with administrative or managerial duties, plus postgraduate supervisions and research stuff, it can be a lot of being on, a lot of just… people work. I’ll throw in a quick disclaimer here that I’m very lucky to have a bunch of lovely colleagues, and the vast majority of my students have been almost saccharinely delightful to work with. It can still be a lot, though, if you’re pretty woeful at scheduling around your energy levels, as I often am. Hashtag high achiever, hashtag people pleaser, hashtag burnout, hashtag hashtag etc etc etc.

    Academics are notorious for keeping weird hours, or for working too much, or for not having any boundaries around work and life. And I say this as someone who has embodied that stereotype with aplomb for years (even pre-academia, to be honest). I’ve had many conversations with colleagues where we bemoan working late into the evening, or over the weekend, or around other commitments. I’ve often been hard-pressed to find anyone who has any hard boundaries around work and not-work.

    Taking extended leave last year was the first time I’ve ever properly stopped working. No sneaky finishing of research projects, no brainstorming the next media class, no cheeky research reading, no emails. It showed me many things, but primarily how insidious work can be for someone with my disposition and approach to life in general. It is also insidious when you are passionate, and when you care. I care deeply about media education and research, and have become familiar with its rhythms and contours, its stresses and its delights, its (many) foibles and much deeper issues. I care about students and ensuring they feel not just ‘delivered to’ or ‘spoken at’, but rather that they’re exposed to new ways of thinking; inspired to learn well beyond graduation, indeed, to never stop learning; enabled and empowered to tell their stories, and whatever stories they want to tell. I care about producing research, e.g. journal articles, video essays, presentations and events, that is not tired, stale, staid, boring, dense, conventional, but rather is experimental, vibrant, connected, open-ended, and appeals broadly across multiple disciplines and outside the academy.

    I’m not alone here. As mentioned above, I have colleagues who almost universally feel exactly the same way. And I’ve built a local and international research network who share these passions and questions and concerns. A global support group. I’m very lucky and privileged in this way.

    But yeah: all this shit is fucking exhausting. The environment, the sector, the period, certainly doesn’t help. The current model of academia, university management, tertiary education, the industry/academy nexus, capitalism (in summary: neoliberalism), all of it is quite happy to capitalise on passion, on modern productivity dicta around never-being-done, irons-in-the-fire, publish or perish, manage it all or die, no life for you, hang the consequences and anyone you’re dealing with who isn’t work (e.g. partners, kids, friends, families). To anyone who says academics have a cushy job and get paid too much: kindly take yourself into the sea, thanks. That may have been true in the past, but we’re now living at the other end of whatever spectrum you’re looking at.

    Suffice to say, slowdowns are nice. Taking proper breaks and/or having an executive echelon that genuinely supports and structures wellbeing and balance would be ideal, but beggars can’t be choosers.