The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: imagination

  • Why can’t you just THINK?!

    Image generated by Leonardo.Ai, 20 May 2025; prompt by me.

    “Just use your imagination” / “Try thinking like a normal person”

    There is this wonderful reactionary nonsense flying around: that using generative AI is a cop-out, that it’s dumbing down society, that it’s killing our imaginations and the rest of what makes us human. That people need AI because they lack the ability to come up with fresh ideas, or to make connections between them. I’ve seen this in social posts, videos, reels, and comments, not to mention Reddit threads, and in conversation with colleagues and students.

    Now — this isn’t to say that some uses of generative AI aren’t light-touch, or couldn’t just as easily be done with tools or methods that have worked fine for decades. Nor is it to say that generative AI doesn’t have its problems: misinformation/hallucination, data ethics, and environmental impacts.

    But what I would say is that for many people, myself very much included, thinking, connecting, synthesising, imagining — these aren’t the problem. What creatives, knowledge workers, and artists often struggle with — not to mention those with different brain wirings, for whom the world can be an overwhelming place just as a baseline — is:

    1. stopping or slowing the number of thoughts, ideas, imaginings, such that we can
    2. get them into some kind of order or structure, so we can figure out
    3. what anxieties, issues, and concerns are legitimate or unwarranted, and also
    4. which ideas are worth developing, to then
    5. create strategies to manage or alleviate the anxieties while also
    6. figuring out how to develop and build on the good ideas

    For some, once you reach step 6, there’s still the barrier of starting. For those OK with starting, there’s the problem of carrying on, of keeping up momentum, or of completing and delivering/publishing/sharing.

    I’ve found generative AI incredibly helpful for stepping me through one or more of these stages, for body-doubling and helping me stop and celebrate wins, suggesting or triggering moments of rest or recovery, and for helping me consolidate and keep track of progress across multiple tasks, projects, and headspaces — both professionally and personally. Generative AI isn’t necessarily a ‘generator’ for me, but rather a clarifier and companion.

    If you’ve tested or played with genAI and it’s not for you, that’s fine. That’s an informed and logical choice. But if you haven’t tested any tools at all, here’s a low-stakes invitation to do so, with three ways to see how it might help you out.

    You can try these prompts and workflows in ChatGPT, Claude, Copilot, Gemini, or another proprietary model, but note, too, that using genAI doesn’t have to mean selling your soul or your data. Try an offline host like LM Studio or GPT4All, where you can download models to run locally — I’ve added suggested models to download and run offline under each helper below. If you’re not confident about your laptop’s capacity to run them (or if, in trying them, things get real sloooooow), you can try many of these independent models via HuggingChat (HuggingFace account required for some features/saved chats).
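
    If you like to tinker, these local hosts can also talk to a script. Below is a minimal sketch in Python, assuming LM Studio’s built-in local server (which speaks the OpenAI API format, by default at http://localhost:1234/v1); the model name is a placeholder for whatever you’ve downloaded.

        # Minimal sketch: chatting with a locally hosted model through
        # LM Studio's local server (OpenAI-compatible, default port 1234).
        # Requires `pip install openai`; the model name is a placeholder.
        from openai import OpenAI

        client = OpenAI(
            base_url="http://localhost:1234/v1",
            api_key="not-needed",  # local servers don't check the key
        )

        response = client.chat.completions.create(
            model="mistral-7b-instruct",  # match whatever you've downloaded
            messages=[
                {"role": "user", "content": "Here's a list of my tasks for today: ..."}
            ],
        )
        print(response.choices[0].message.content)

    GPT4All works similarly via its own Python package; there’s a sketch of that under Helper 1.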

    These helpers are designed as lightweight executive/creative assistants — not hacks or cheats or shortcuts or slop generators, but frames or devices for everyday thinking, planning, and feeling. Some effort and input are required from you to make these work: this isn’t about replacing workload, effort, thought, contextualising, or imagination, but about removing blank-page terror and context-switching/decision fatigue.

    If these help, take (and tweak) them. If not, no harm done. Just keep in mind: not everyone begins the day with clarity, capacity, or calm — and sometimes, a glitchy little assistant is just what’s needed to tip the day in our favour.


    PS: If these do help — and even if they don’t — tell me in the comments. Did you tweak or change them? I’m happy to post more on developing and consolidating these helpers, such as through system prompts. (See also: an earlier post on my old Claude set-up.)



    Helper 1: Daily/Weekly Planner + Reflector

    Prompt:

    Here’s a list of my tasks and appointments for today/this week:
    [PASTE LIST]

    Based on this and knowing I work best in [e.g. mornings / 60-minute blocks / pomodoro technique / after coffee], arrange my day/s into loose work blocks [optional: between my working hours of e.g. 9:30am – 5:30pm].

    Then, at the end of the day/week, I’ll paste in what I completed. When I do that, summarise what was achieved, help plan tomorrow/next week based on unfinished tasks, and give me 2–3 reflection questions or journalling prompts.

    Follow-up (end of day/week):

    Here’s what I completed today/this week:
    [PASTE COMPLETED + UNFINISHED TASKS]

    Please summarise the day/week, help me plan tomorrow/next week, and give me some reflection/journalling prompts.

    Suggested offline models:

    • Mistral-7B Instruct (Q4_K_M GGUF) — low-medium profile model for mid-range laptops; good with planning, lists, and reflection prompts when given clear instructions
    • OpenHermes-2.5 Mistral — stronger reasoning and better output formatting; better at handling multi-step tasks and suggesting reflection angles
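
    If you’d rather run the whole plan-then-reflect loop from one script, here’s a rough sketch using the gpt4all Python package; the model filename and task list are placeholders, and the same two-turn pattern works for the other helpers below.

        # Sketch of Helper 1 as a two-turn chat with a local model,
        # using the gpt4all Python package (pip install gpt4all).
        # Model filename and tasks are placeholders; swap in your own.
        from gpt4all import GPT4All

        model = GPT4All("mistral-7b-instruct-v0.1.Q4_K_M.gguf")

        with model.chat_session():  # keeps both turns in one conversation
            plan = model.generate(
                "Here's a list of my tasks and appointments for today:\n"
                "- Draft lecture notes\n- 2pm supervision meeting\n- Review chapter\n\n"
                "Knowing I work best in mornings, arrange my day into loose work blocks.",
                max_tokens=512,
            )
            print(plan)

            # End of day: paste back what actually happened.
            reflection = model.generate(
                "Here's what I completed today: lecture notes and the meeting; "
                "the chapter review is unfinished. Please summarise the day, help me "
                "plan tomorrow, and give me 2-3 reflection/journalling prompts.",
                max_tokens=512,
            )
            print(reflection)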



    Helper 2: Brain Dump Sorter

    Prompt:

    Here’s a raw brain-dump of my thoughts, ideas, frustrations, and feelings:
    [PASTE DUMP HERE — I suggest dictating into a note to avoid self-editing]

    Please:

    1. Pull out any clear ideas or recurring themes
    2. Organise them into loose categories (e.g. creative ideas, anxieties, to-dos, emotional reflections)
    3. Suggest any small actions or helpful rituals to follow up, especially if anything seems urgent, stuck, or energising.

    Suggested offline models:

    • Nous-Hermes-2 Yi 6B — a mini-model (aka small language model, or at least an LLM that’s smaller than most!) with good abilities in organisation and light sorting-through of emotions, triggers, etc. Good for extracting themes, patterns, and light structuring of chaotic input.
    • MythoMax-L2 13B — balanced emotional tone, chaos-wrangling, and action-oriented suggestions. Handles fuzzy or frazzled or fragmented brain-dumps well; has a nice, easygoing but also pragmatic and constructive persona.



    Helper 3: Creative Block / Paralysis

    Prompt:

    I’m feeling blocked/stuck. Here’s what’s going on:
    [PASTE THOUGHTS — again, dictation recommended]

    Please:

    • Respond supportively, as if you’re a gentle creative coach or thoughtful friend
    • Offer 2–3 possible reframings or reminders
    • Give me a nudge or ritual to help me shift (e.g. a tiny task, reflection, walk, freewrite, etc.)

    You don’t have to solve everything — just help me move one inch forward or step back/rest meaningfully.

    Suggested offline models:

    • TinyDolphin-2.7B (GGUF or GPTQ builds) — one of my favourite mini-models: surprisingly gentle, supportive, and adaptive if well-primed. Not big on poetry or ritual, but friendly and low-resource.
    • Neural Chat 7B (Intel’s fine-tune of Mistral) — tuned for conversation, reflection, and introspection; performs well with ‘sounding board’ type prompts, good as a coach or helper, won’t assume immediate action, urgency, or priority
  • Grotesque fascination

    A few weeks back, some colleagues and I were invited to share some new thoughts and ideas on the theme of ‘ecomedia’, as a lovely and unconventional way to launch Simon R. Troon’s newest monograph, Cinematic Encounters with Disaster: Realisms for the Anthropocene. Here’s what I presented: a few scattered scribblings on environmental imaginaries as mediated through AI.


    Grotesque Fascination:

    Reflections from my weekender in the uncanny valley

    In February 2024, OpenAI announced their video generation tool Sora. In the technical paper that accompanied this announcement, they referred to Sora as a ‘world simulator’. Not just Sora, but also DALL-E, Runway, and Midjourney: all of these AI tools further blur and problematise the lines between the real and the virtual. Image and video generation tools re-purpose, re-contextualise, and regurgitate how humans perceive their environments and those around them. These tools offer a carnival mirror’s reflection of what we privilege and prioritise, and what we are prejudiced against, in our collective imaginations. In particular today, I want to talk a little bit about how generative AI tools might offer up new ways to relate to nature, and how they might also call into question the ways that we’ve visualised our environment to date.

    AI media generators work from datasets that comprise billions of images, as well as text captions, and sometimes video samples; the model maps all of this information, using advanced mathematics, into a hyper-dimensional space, sometimes called the latent space. A random image of noise is then generated and fed through the model, along with a text prompt from the user. Guided by the text, the model (typically via a neural network called a U-Net) gradually de-noises the image, step by step, towards something it deems appropriate to the given prompt.
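
    For the technically curious, the heart of that process is a small loop: start with noise, repeatedly subtract what the model thinks is noise, and an image emerges. The Python below is a toy illustration only, with a placeholder where the trained network (often a U-Net) would actually run.

        # Toy illustration of diffusion-style sampling; not any real model.
        # predict_noise() is a placeholder for the trained network (often
        # a U-Net) that estimates noise in the image, guided by the prompt.
        import numpy as np

        rng = np.random.default_rng(0)

        def predict_noise(image, prompt_embedding, step):
            # Placeholder: a real model runs a neural network here.
            return 0.1 * image

        image = rng.normal(size=(64, 64, 3))        # start from pure noise
        prompt_embedding = rng.normal(size=(512,))  # stand-in for the encoded prompt

        for step in reversed(range(50)):            # gradually remove predicted noise
            image = image - predict_noise(image, prompt_embedding, step)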

    In these datasets, there are images of people, of animals, of built and natural environments, of objects and everyday items. These models can generate scenes of the natural world very convincingly. These generations remind me of the open virtual worlds in video games like Skyrim or Horizon Zero Dawn: there is a real, visceral sense of connection to these worlds as you move through them. In a similar way, when you’re playing with tools like Leonardo or Midjourney, there can often be visceral, embodied reactions to the images or media that they generate: Shane Denson has written about this in terms of “sublime awe” and “abject cringe”. Like video games, too, AI media generators allow us to observe worlds that we may never see in person. Indeed, some of the landscapes we generate may be completely alien or biologically impossible, at least on this planet, opening our eyes to different ecological possibilities or environmental arrangements. Visualising or imagining how ecosystems might develop is one way of potentially increasing awareness of those that are remote, unexplored, or endangered; we may also be able to imagine how the real natural world might be impacted by our actions in the distant future. These alien visions might also, I suppose, prepare us for encountering different ecosystems and modes of life and biology on other worlds.

    It’s worth considering, though, how this re-visualisation, virtualisation, and re-constitution of environments, be they realistic or not, might change, evolve, or hinder our collective mental image of what constitutes ‘Nature’, or our capacity to imagine it. This experience of generating ecosystems and environments may increase appreciation for our own very real, very tangible natural world and the impacts that we’re having on it, but like all imagined or technically-mediated processes there is always a risk of disconnecting people from that same very real, very tangible world around them. They may well prefer the illusion; they may prefer some kind of perfection, some kind of banal veneer that they can have no real engagement with or impact on. And it’s easy to ignore the staggering environmental impacts of the technology companies pushing these tools when you’re engrossed in an ecosystem of apps and not of animals.

    In previous work, I proposed the concept of virtual environmental attunement, a kind of hyper-awareness of nature that might be enabled or accelerated by virtual worlds or digital experiences. I’m now tempted to revisit that theory in terms of asking how AI tools problematise that possibility. Can we use these tools to materialise or make perceptible something that is intangible, virtual, immaterial? What do we gain or lose when we conceive or imagine, rather than encounter and experience?

    Machine vision puts into sharp relief the limitations of humanity’s perception of the world. But for me there remains a certain romance and beauty and intrigue — a grotesque fascination, if you like — to living in the uncanny valley at the moment, and it’s somewhere that I do want to stay a little bit longer. This is despite the omnipresent feeling of ickiness and uncertainty when playing with these tools, while the licensing of the datasets that they’re trained on remains unclear. For now, though, I’m trying to figure out how connecting with the machine-mind might give some shape or sensation to a broader feeling of dis-connection.

    How my own ideas and my capacity to imagine might be extended or supplemented by these tools, changing the way I relate to myself and the world around me.