The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: technology

  • Clearframe

    Detail of an image generated by Leonardo.Ai, 3 May 2025; prompt by me.

    An accidental anti-productivity productivity system

    Since 2023, I’ve been working with genAI chatbots. What began as a novelty, occasionally useful for a quick grant summary or newsletter edit, has grown into a flexible, light-touch system spanning Claude, ChatGPT, and offline models. Taken together, these tools are closer to a co-worker, even a kind of assistant. Along the way, I’ve learned a great deal about how these enormous proprietary models work.

    Essentially, context is key: building up a collection of prompts or use cases, keeping simple and iterable context/knowledge documents and system instructions, and testing how far back in a chat the model can reliably recall.

    With Claude, context is tightly controlled: it either sits within individual chats, or it’s contained within Projects, tailored and customised collections of chats that are ‘governed’ by umbrella system instructions and knowledge documents.
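
    For the terminally curious, here’s a minimal sketch of how that ‘umbrella’ layer maps onto Anthropic’s Python SDK, if you step outside the web app. Projects as such don’t exist in the API; the rough equivalent is a reusable system prompt. The model name below is a placeholder, and I’m assuming an API key in the environment.

    ```python
    # Minimal sketch: chat-level context vs an umbrella system prompt,
    # via the Anthropic Python SDK (pip install anthropic).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=512,
        # The system field plays the role of a Project's umbrella instructions.
        system="You are a concise editorial assistant for an academic blog.",
        # The messages list is the chat-level context.
        messages=[{"role": "user", "content": "Tighten this paragraph for me: ..."}],
    )
    print(response.content[0].text)
    ```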

    This is a little different to ChatGPT, where context can often bleed between chats, aided by its ‘memory’ functionality, which is a kind of blanket set of context notes.

    I have always struggled with time, focus, task/project management, and motivation: challenges later clarified by an ADHD diagnosis. Happily, though, it turns out that executive-functioning support is one thing that generative AI can do pretty well. Its own mechanisms are a kind of targeted looking: many ‘attention heads’ running in parallel, each checking input tokens against a different set of learned conditions. And it turns out that with a bit of foundational work around projects, tasks, responsibilities, and so on, genAI can do much of the work of an executive assistant; maybe not locking in your meetings or booking travel, but with agentic AI that can’t be far off.
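
    If you want the mechanical version of that ‘targeted looking’, here’s a toy sketch of scaled dot-product attention, the operation those heads perform. It’s generic numpy, nothing specific to Claude or ChatGPT:

    ```python
    # Toy scaled dot-product attention: each query scores every key,
    # softmax turns scores into weights, and the output is a weighted
    # blend of the values. A full model runs many such 'heads' in
    # parallel over learned projections of the input.
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key match scores
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
        return weights @ V                              # blend values by weight

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, 8 dims
    print(attention(Q, K, V).shape)  # (4, 8)
    ```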

    You might start to notice patterns in your workflow, energy, or attention, or ask the model to help you explore them. You can map trends across weeks and months, really start to get a sense of some of your key triggers and obstacles, and ask for suggestions for aids and supports.

    In one of these reflective moments, I went off on a tangent around productivity methods, systems overwhelm, and the lure of the pivot. I suggested lightly that some of these methods were akin to cults, with their strict doctrines and their acolytes and heretics. The LLM—used to my flights of fancy by this point and happy to riff—said this was an interesting angle, and asked if I wanted to spin it up into a blog post, academic piece, or something creative. I said creative, and that starting with a faux pitch from a culty productivity influencer would be a fun first step.

    I’d just watched The Institute, a 2013 documentary about the alternate reality game ‘The Jejune Institute’, and fed in my thoughts on the curious psychology of willing suspension of disbelief, even when narratives play out in the wider world. The LLM knew about my studio this semester, a revised version of a previous theme on old/new media, physical experiences, liveness and presence; it first suggested a digital tool, but once I mentioned the studio it knew that I was after something analogue, something paper-based.

    We went back and forth in this way for a little while, until we settled on a ‘map’ of four quadrants. These four quadrants echoed themes from my work and interests: focus (what you’re attending to), friction (what’s in your way), drift (where your attention wants to go), and signal (what keeps breaking through).

    I found myself drawn to the simplicity of the system; somewhat irritating, given that this began with a desire to satirise these kinds of methods or approaches. But its tactile, hand-written form, as well as its lack of prescription about what to note down or how to use it, made it attractive as a frame for reflecting on… on what? Again, I didn’t want this to be set in stone, to become a drag or a burden… so, going back and forth with the LLM again, we decided it could be a daily practice, or every other day, even every other month. Maybe it could be used for a specific project. Maybe you do it as a set-up/psych-up activity, or maybe it’s more for afterwards, to look back on how things went.

    So this anti-productivity method that I spun up with a genAI chatbot has actually turned into a low-stakes, low-effort means of setting up my days, or looking back on them. Five or six weeks in, there are weeks where I draw up a map most days, and others where I might do one on a Thursday or Friday or not at all.

    Clearframe was one of the names the LLM suggested, and I liked how banal it was, how plausible for this kind of method. Once the basic model was down, the LLM generated five modules—every method needs its handbook. There’s an Automata—a set of tables and prompts to help when you don’t know where to start, and even a card deck that grows organically based on patterns, signals, ideas.

    Being a lore- and world-builder, I couldn’t help but start to layer in some light background on where the system emerged, and how glitch and serendipity are built in. But the system and its vernacular are so light-touch, so generic, that I’m sure you could tweak it to any taste or theme: art, music, gardening, sport, take your pick.

    Clearframe was, in some sense, a missing piece of my puzzle. I get help with other aspects of executive assistance through LLM interaction, or through systems of my own that pre-dated my ADHD diagnosis. What I consistently struggle to find time for, though, is reflection—some kind of synthesis or observation or wider view on things that keep cropping up or get in my way or distract me or inspire me. That’s what Clearframe allows.

    I will share the method at some stage—maybe in some kind of pay-what-you-want zine, mixed physical/digital, or RPG/ARG-type form. But for now, I’m just having fun playing around, seeing what emerges, and how it’s growing.

    Generative AI is both boon and demon: lauded in software and content production, distrusted or underused in academia and the arts. I’ve found that, for me, its utility and its joy lie in presence, not precision: a low-stakes companion that riffs, reacts, and occasionally reveals something useful. Most of the time it offers options I discard, but even that helps clarify what I do want. It doesn’t suit every project or person, for sure, but sometimes it accelerates an insight, flips a problem, or nudges you somewhere unexpected, like a personalised way to re-frame your day. AI isn’t sorcery, just maths, code, and language: in the right combo, though, these sure can feel like magic.

  • A question concerning technology

    Image by cottonbro studio on Pexels.

    There’s something I’ve been ruminating on and around of late. I’ve started drafting a post about it, but I thought I’d post an initial provocation here, to lay a foundation, to plant a seed.

    A question:

    When do we stop hiding in our offices, pointing at and whispering about generative AI tools, and start just including them in the broader category of technology? When do we sew up the hole this fun/scary new thing poked into our blanket, and accept it as part of the broader fabric of lived experience?

    I don’t necessarily mean usage here, but rather just mental models and categorisations.

    Of course, AI/ML is already part of daily life and many of the systems we engage with; and genAI has been implemented across almost every sector (legitimately or not). But most of the corporate narratives and mythologies of generative AI don’t want anyone understanding how the magic works: these mythologies actively undermine and discourage literacy and comprehension, coasting along instead on dreams and vibes.

    So: when does genAI become just one more technology, and what problems need to be solved, and what questions answered, before that happens?

    I posted this on LinkedIn to try and stir up some Hot Takes, but if you prefer the quiet of the blog (me too), drop your thoughts in the comments.

  • Give me your answer, do

    By Ravi Kant on Pexels, 13 Mar 2018.

    For better or worse, I’m getting a bit of a reputation as ‘the AI guy’ in my immediate institutional sub-area. Depending on how charitable you’re feeling, this could be seen as very generous or wildly unfounded. I am not in any way a computer scientist or expert on emergent consciousness, synthetic cognition, language models, media generators, or even prompt engineering. I remain the same old film and media teacher and researcher I’ve always been. But I have always used fairly advanced technology as part of anything creative. My earliest memories are of typing up, decorating, and printing off books or banners or posters from my Dad’s old IBM computer. From there it was using PC laptops and desktops, and programs like Publisher or WordPerfect, 3D Movie Maker and Fine Artist, and then more media-specific tools at uni, like Final Cut and Pro Tools.

    Working constantly with computers, software, and apps automatically turns you into something of a problem-solver; the hilarious ‘joke’ of media education is that the teachers have to be only slightly quicker than their students at Googling a solution. As well as problem-solving, I am predisposed to ‘daisy-chaining’. My introduction to the term was as a means of connecting multiple devices together; on Mac systems circa 2007-2017 this was fairly standard practice, thanks to the inter-connectivity of FireWire cables and ports (though I’m informed that this is still common even over USB). Reflecting back on years of software and tool usage, though, I can see how I was daisy-chaining constantly: ripping from CD or DVD, or capturing from tape, then converting to a usable format in one program, then importing the export into another program, editing or adjusting, exporting once again, then burning or converting, et cetera. Even not that long ago, there weren’t exactly ‘one-stop’ solutions for media in the way you might see an app like CapCut or Instagram now.

    There’s also a kind of ethos to daisy-chaining. In shifting from one app, program, platform, or system to another, you’re learning different ways of doing things, adapting your workflows each time, even if only subtly. Each interface presents you with new or different options, so you can apply a unique combination of visual, aural, and affective layers to your work. There’s also an ethos of independence: you are not locked in to one app’s way of doing things. You are adaptable, changeable, and you cherry-pick the best of what a variety of tools has to offer in order to make your work the best it can be. This is the platform economics argument, or the political economy of platforms argument, or some variant on all of this. Like everyone, I’ve spent many hours whinging about the time it took to make stuff or to get stuff done, wishing there was the ‘perfect app’ that would just do it all. But over time I’ve come to love my bundle of tools: the set I download and install first whenever I get a new machine (or have to wipe an old one); my (vomits) ‘stack’.

    * * * * *

    The above philosophy is what I’ve found myself applying to AI tools. I suppose, out of all of them, I use Claude the most. I’ve found it the most straightforward in terms of setting up custom workspaces (what Claude calls ‘Projects’ and what ChatGPT calls ‘Custom GPTs’), and I just generally really like the character and flavour of the responses I get back. I like that it’s a little wordy, a little more academic, a little more florid, because that’s how I write and speak; though I suppose the outputs are not just encoded into the model, but also a mirror of how I’ve engaged with it. Right now in Claude I have a handful of projects set up (a rough sketch of approximating one via the API follows the list):

    • Executive Assistant: Helps me manage my time, prioritise tasks, and stay on track with work and creative projects. I’ve given it summaries of all my projects and commitments, so it can offer informed suggestions where necessary.
    • Research Assistant: I’ve given this most of my research outputs, as well as a curated selection of research notes, ideas, reference summaries, and sometimes whole source texts. This project is where I’ll brainstorm research or teaching ideas, flesh out concepts, build courses, and so on.
    • Creative Partner: This remains semi-experimental, because I actually don’t find AI that useful in this particular instance. However, this project has been trained on a couple of my produced media works, as well as a handful of creative ideas. I find the responses far too long to be useful, and often very tangential to what I’m actually trying to get out of it—but this is as much a project context and prompting problem as it is anything else.
    • 2 x Course Assistants: Two projects have been trained with all the materials related to the courses I’m running in the upcoming semester. These projects are used to brainstorm course structures, lesson plans, and even lecture outlines.
    • Systems Assistant: This is a little different to the Executive/Research Assistants, in that it is specifically set up around ‘systems’: the various tools, methods, and workflows that I use for any given task. It’s also a kind of ‘life admin’ helper, in the sense of managing information, documents, knowledge, and so on. Now that I think of it, ‘Daisy’ would probably be a great name for this project… but then again.
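
    As promised, here’s a rough sketch of approximating one of these projects via Anthropic’s Python SDK. Projects are a claude.ai feature rather than an API object, so the wrapper class, file names, and model string below are all my own illustrative stand-ins:

    ```python
    # A hypothetical 'Project' wrapper: umbrella instructions plus knowledge
    # documents, shared across chats. Not how claude.ai implements Projects;
    # just an approximation via the public Messages API.
    import anthropic
    from pathlib import Path

    class Project:
        def __init__(self, instructions: str, knowledge_files: list[str]):
            docs = "\n\n".join(Path(p).read_text() for p in knowledge_files)
            # Umbrella layer: instructions plus knowledge, sent with every chat.
            self.system = f"{instructions}\n\n<knowledge>\n{docs}\n</knowledge>"
            self.client = anthropic.Anthropic()

        def chat(self, history: list[dict], user_message: str) -> str:
            history.append({"role": "user", "content": user_message})
            reply = self.client.messages.create(
                model="claude-sonnet-4-20250514",  # placeholder model name
                max_tokens=1024,
                system=self.system,  # project-level context
                messages=history,    # chat-level context
            )
            history.append({"role": "assistant", "content": reply.content[0].text})
            return history[-1]["content"]

    # Each chat keeps its own history under the same umbrella.
    exec_assistant = Project(
        "You are my executive assistant; keep me on track.",
        ["projects_summary.md", "commitments.md"],  # hypothetical knowledge docs
    )
    morning_chat: list[dict] = []
    print(exec_assistant.chat(morning_chat, "What should I tackle first today?"))
    ```

    The wrapper mostly exists to make the separation visible: the umbrella context rides along with every request, while each individual chat keeps its own running history.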

    I will often bounce ideas, prompts, notes between all of these different projects. How much this process corrupts the ‘purity’ of each individual project is not particularly clear to me, though I figure if it’s done in an individual chat instance it’s probably not that much of an issue. If I want to make something part of a given project’s ongoing working ‘knowledge’, I’ll put a summary somewhere in its context documents.

    But Claude is just one of the AI tools I use. I also have a bunch of language models on a hard drive that is always connected to my computer; I use these through the app GPT4All. These have similar functionality to Claude, ChatGPT, or any other proprietary/corporate LLM chatbot. Apart from the upper limit on their context windows, they have no usage limits; they run offline, privately, and at no cost. Their efficacy, though, is mixed. Llama and its variants are usually pretty reliable, though as Meta-built models they come with an accompanying ‘ick’ whenever I use them. Falcon, Hermes, and OpenOrca are independently developed, though these have taken quite some getting used to; I’ve found that cloning them and priming the clones with specific documents and unique context prompts is the best way to use them.

    With all of these tools, I frequently jump between them, testing the same prompt across multiple models, or asking one model to generate prompts for another. This is a system of usage that may seem confusing at first glance, but is actually quite fluid. The outputs I get are interesting, diverse, and useful, rather than all being of the same ‘flavour’. Getting three different summaries of the same article, for example, lets me see what different models privilege in their ‘reading’—and then I’ll know which tool to use to target that aspect next time. Using AI in this way is still time-intensive, but I’ve found it much less laborious than repeatedly hammering at a prompt in a single tool trying to get the right thing. It’s also much more enjoyable, and feels more ‘human’, in the sense that you’re bouncing around between different helpers, all of whom have different strengths. The fail-rate is thus significantly lowered.
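
    The local end of this bouncing-around is scriptable too. Here’s a minimal sketch using the gpt4all Python bindings, which sit alongside the desktop app; the model filenames and path are placeholders for whatever .gguf files you have on disk:

    ```python
    # Same prompt, several local models, for side-by-side comparison.
    # Requires: pip install gpt4all; models already downloaded locally.
    from gpt4all import GPT4All

    MODELS = [
        "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
        "Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf",
    ]
    PROMPT = "Summarise the following argument in three sentences: ..."

    for name in MODELS:
        # Runs offline, with no usage limits beyond the context window.
        model = GPT4All(name, model_path="/path/to/models", allow_download=False)
        with model.chat_session(system_prompt="You are a careful academic reader."):
            reply = model.generate(PROMPT, max_tokens=256)
        print(f"--- {name} ---\n{reply}\n")  # compare what each model privileges
    ```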

    Returning to ethos, using AI in this way feels more authentic. You learn more quickly how each tool functions, and what each is best at. Jumping between tools feels less like a context switch, as it might between software, and more like asking a different co-worker to weigh in. As someone who processes things through dialogue, be that with myself, with a journal, or with a friend or family member, I find this a surprisingly natural way of working, of learning, and of creating. I may not be ‘the AI guy’ from a technical or qualifications standpoint, but I feel like I’m starting to earn the moniker from a practical, runs-on-the-board perspective at least.

  • New research published: The Allure of Artificial Worlds

    ‘Vapourwave Hall’, generated by me using Leonardo.Ai.

    This is a little late, as the article was actually released back in November; but since I swore off work for a month over December and into the new year, I held off on posting here.

    This piece, ‘The Allure of Artificial Worlds’, is my first small contribution to AI research: specifically, I look at how the visions conjured by image and video generators might be considered their own kinds of worlds. There is a nod here, as well, to ‘simulative AI’, also known as agentic AI, which many feel may be the successor to generative AI tools operating singularly. We’ll see.


    Abstract

    With generative AI (genAI) and its outputs, visual and aural cultures are grappling with new practices in storytelling, artistic expression, and meme-farming. Some artists and commentators sit firmly on the critical side of the discourse, citing valid concerns around utility, longevity, and ethics. But more spurious judgements abound, particularly when it comes to quality and artistic value.

    This article presents and explores AI-generated audiovisual media and AI-driven simulative systems as worlds: virtual technocultural composites, assemblages of material and meaning. In doing so, this piece seeks to consider how new genAI expressions and applications challenge traditional notions of narrative, immersion, and reality. What ‘worlds’ do these synthetic media hint at or create? And by what processes of visualisation, mediation, and aisthesis do they operate on the viewer? This piece proposes that these AI worlds offer a glimpse of a future aesthetic, where the lines between authentic and artificial are blurred, and the human and the machinic are irrevocably enmeshed across society and culture. Where the uncanny is not the exception, but the rule.

  • De-platforming is hard

    Falling (detail), by me, 18 Nov 2024.

    I have two predilections that sometimes work hand in hand, and other times butt up against each other. The first is apps, tools, technology, all the shiny things; the second is a deep belief in supporting independent creators, developers, inventors, and so on. You can see fairly clearly here where the tensions lie.

    For a long time I’ve mainly indulged the former, while proselytising-but-not-really-acting-on the latter. I’ve done the best I can to try smaller, indie folx as much as possible, but the juggernaut of platform capitalism is a shrewd and insidious demon; one that is very, very difficult to exorcise.

    This year has been a period of learning and attempting to reorient and re-prioritise. The first big move was this site, which I desperately wanted to take off WordPress’s hosting. Once I found a pretty good hosting deal elsewhere, it took only a few weeks of mucking about to transfer everything over.

    It’s ironic, in a way, that one of the first things I did after migrating the site was to install WordPress as a front-end system to keep everything running¹. I did give less corporate-affiliated, more indie and ethical alternatives a look and a try, but it was either too tricky at the time to convert the existing archive, or they just weren’t particularly intuitive to me. At the time of writing, I’ve been working with the WordPress platform personally and professionally for well over a decade: it’s hard to pull up roots from that foundation.

    A few weeks ago I was looking at my budget spreadsheet. I’m not necessarily pinching pennies at the moment, but after spending most of my life without any kind of financial system or oversight or instinct at all, this simple spreadsheet is nothing short of a miracle. I was tinkering with expense categories and absently flicked to app subscriptions, and was fairly shocked at the total I saw. This category includes pro/premium subscriptions for apps like Todoist and Fantastical, but also many others that I’ve accumulated, particularly in the last year or two as I’ve really built up my work and personal workflows and systems. Now, this work is important, and as noted earlier I do love playing around with new apps, toys, and so on. But when you see an annual, monthly, or fortnightly total like that for purchases that aren’t exactly ‘essential’, it can pull you up short.

    When I was re-jigging my old Raspberry Pi earlier in the year (possibly worth re-visiting that in a future post), I was keen to try and set it up as its own little server, running a bunch of little apps that might serve as a private, personal organisation/admin hub. Self-hosting is an awesome idea in theory and principle, but in practice, without a fairly hefty amount of sysadmin knowledge, it can be tricky. But emboldened by the desire to save some cash, I waded back into that world once more; not necessarily to set up a private server, but at least to load up some self-hosted alternatives to the larger expenses.

    I went in a little more prepared this time, doing some reading, watching a few videos, getting my head around things like package managers, Docker and its containers, Homebrew, and even basic command line usage. Some of the apps I tried were intriguing, some were intuitive and well-designed, others were a little more wireframe-like, but still generally performed their tasks pretty well. After trying maybe a dozen self-hosted apps, though, I’m still using only one, and in most of the other cases, I’ve retained my subscriptions to the apps I was using before.

    As with WordPress, it’s hard to shift to something new. But it’s particularly hard when much of your ‘system’ has been chugging along effectively for several months, even years. My own system is far from perfect. Many of the parts of the system talk to each other, sometimes seamlessly via a widget or integration, other times via some kind of jerry-rigged or brute force solution. But many of the parts don’t interact. It’s clean and pleasing sometimes; other times it’s messy and frustrating. But after fumbling around in the dark for many years, trying all sorts of different methods, apps, systems, modes, on- and off-line configurations, it basically comes down to the satisfaction of having a system that I constructed myself that works for me. That satisfaction is what makes it hard to tweak the way things work at the moment.

    Experiments are important, though, and through the various little adventures I’ve had this year—from tinkering with old PCs, Macs, and Pis, to starting to consolidate and catalogue my not-insignificant digital media collection, to trying out a few indie/self-hosted options—I’ve started to wade into a whole other ecosystem of hardware, software, workflows, philosophies, methods, and techniques. This feels like somewhere I can be curious, can learn, can experiment, can fail, can build and create, and find pathways to a system slightly less dependent on tech megaliths: something ethical, sustainable, adaptable, friendly, and fun.


    1. WordPress is both a corporation and a (supposedly) non-profit organisation. The two are usually differentiated via their URL suffixes: WP.com is the corp, WP.org the non-profit. WP.org offers its CMS tool open-source, so anyone can install it on their web server regardless of host. That’s what I did for this site when I shifted.