The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: Technology

  • Give me your answer, do

    By Ravi Kant on Pexels, 13 Mar 2018.

    For better or worse, I’m getting a bit of a reputation as ‘the AI guy’ in my immediate institutional sub-area. Depending on how charitable you’re feeling, this could be seen as very generous or wildly unfounded. I am not in any way a computer scientist or expert on emergent consciousness, synthetic cognition, language models, media generators, or even prompt engineering. I remain the same old film and media teacher and researcher I’ve always been. But I have always used fairly advanced technology as part of anything creative. My earliest memories are of typing up, decorating, and printing off books or banners or posters from my Dad’s old IBM computer. From there it was using PC laptops and desktops, and programs like Publisher or WordPerfect, 3D Movie Maker and Fine Artist, and then more media-specific tools at uni, like Final Cut and Pro Tools.

    Working constantly with computers, software, and apps automatically turns you into something of a problem-solver—the hilarious ‘joke’ of media education is that the teachers only have to be slightly quicker than their students at Googling a solution. As well as problem-solving, I am predisposed to ‘daisy-chaining’. My introduction to the term was as a means of connecting multiple devices together—on Mac systems circa 2007-2017 this was fairly standard practice thanks to the interconnectivity of FireWire cables and ports (though I’m informed the practice is still common, even over USB). Reflecting back on years of software and tool usage, though, I can see how I was daisy-chaining constantly: ripping from CD or DVD, or capturing from tape, then converting to a useable format in one program, then importing the export into another program, editing or adjusting, exporting once again, then burning or converting, et cetera, et cetera. Even not that long ago, there weren’t exactly ‘one-stop’ solutions for media in the way that an app like CapCut or Instagram might be seen now.

    There’s also a kind of ethos to daisy-chaining. In shifting from one app, program, platform, or system to another, you’re learning different ways of doing things, adapting your workflows each time, even if only subtly. Each interface presents you with new or different options, so you can apply a unique combination of visual, aural, and affective layers to your work. There’s also an ethos of independence: you are not locked in to one app’s way of doing things. You are adaptable, changeable, and you cherry-pick the best of what a variety of tools has to offer in order to make your work the best it can be. This is the platform economics argument, or the political economy of platforms argument, or some variant on all of this. Like everyone, I’ve spent many hours whinging about the time it took to make stuff or to get stuff done, wishing there were a ‘perfect app’ that would just do it all. But over time I’ve come to love my bundle of tools—the set I download and install first whenever I get a new machine (or have to wipe an old one); my (vomits) ‘stack’.

    * * * * *

    The above philosophy is what I’ve found myself applying to AI tools. Of all of them, I suppose I use Claude the most. I’ve found it the most straightforward in terms of setting up custom workspaces (what Claude calls ‘Projects’ and what ChatGPT calls ‘Custom GPTs’), and I just generally like the character and flavour of the responses I get back. I like that it’s a little wordy, a little more academic, a little more florid, because that’s how I write and speak; though I suppose the outputs are not just encoded into the model, but also a mirror of how I’ve engaged with it. Right now in Claude I have a handful of projects set up:

    • Executive Assistant: Helps me manage my time, prioritise tasks, and keep me on track with work and creative projects. I’ve given it summaries of all my projects and commitments, so it can offer informed suggestions where necessary.
    • Research Assistant: I’ve given this most of my research outputs, as well as a curated selection of research notes, ideas, reference summaries, and sometimes whole source texts. This project is where I’ll brainstorm research or teaching ideas, flesh out concepts, build courses, and so on.
    • Creative Partner: This remains semi-experimental, because I actually don’t find AI that useful in this particular instance. However, this project has been trained on a couple of my produced media works, as well as a handful of creative ideas. I find the responses far too long to be useful, and often very tangential to what I’m actually trying to get out of it—but this is as much a project context and prompting problem as it is anything else.
    • 2 x Course Assistants: Two projects have been trained with all the materials related to the courses I’m running in the upcoming semester. These projects are used to brainstorm course structures, lesson plans, and even lecture outlines.
    • Systems Assistant: This is a little different to the Executive/Research Assistants, in that it is specifically set up around ‘systems’: the various tools, methods, and workflows that I use for any given task. It’s also a kind of ‘life admin’ helper, in the sense of managing information, documents, knowledge, and so on. Now that I think of it, ‘Daisy’ would probably be a great name for this project—but then again…

    I will often bounce ideas, prompts, notes between all of these different projects. How much this process corrupts the ‘purity’ of each individual project is not particularly clear to me, though I figure if it’s done in an individual chat instance it’s probably not that much of an issue. If I want to make something part of a given project’s ongoing working ‘knowledge’, I’ll put a summary somewhere in its context documents.

    But Claude is just one of the AI tools I use. I also have a bunch of language models on a hard drive that is always connected to my computer; I use these through the app GPT4All. These have similar functionality to Claude, ChatGPT, or any other proprietary/corporate LLM chatbot. Apart from the upper limit on their context windows, they have no usage limits; they run offline, privately, and at no cost. Their efficacy, though, is mixed. Llama and its variants are usually pretty reliable—though this is a Meta-built model, so there’s an accompanying ‘ick’ whenever I use it. Falcon, Hermes, and OpenOrca are independently developed, though these have taken quite some getting used to—I’ve also found that cloning them and training them on specific documents and with unique context prompts is the best way to use them.
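
    As a rough illustration, here’s a minimal sketch of this kind of setup using GPT4All’s Python bindings rather than the desktop app. The model filename, folder path, and system prompt are placeholders rather than my actual configuration; the point is simply that everything runs locally and offline.

    ```python
    # Minimal sketch: running a local model offline with GPT4All's Python bindings.
    # The model file, folder, and system prompt below are illustrative placeholders.
    from gpt4all import GPT4All

    model = GPT4All(
        "Meta-Llama-3-8B-Instruct.Q4_0.gguf",    # a quantised model file on the external drive
        model_path="/Volumes/llm-drive/models",  # hypothetical folder of downloaded models
        allow_download=False,                    # stay fully offline
    )

    # A 'cloned' assistant is essentially a system prompt plus whatever context you feed it.
    with model.chat_session(system_prompt="You are a concise research assistant for a media scholar."):
        print(model.generate("Suggest three angles for a talk on AI and ecomedia.", max_tokens=300))
    ```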

    I frequently jump between all of these tools, testing the same prompt across multiple models, or asking one model to generate prompts for another. This system of usage may seem confusing at first glance, but it’s actually quite fluid. The outputs I get are interesting, diverse, and useful, rather than all being of the same ‘flavour’. Getting three different summaries of the same article, for example, lets me see what different models privilege in their ‘reading’—and then I’ll know which tool to use to target that aspect next time. Using AI in this way is still time-intensive, but I’ve found it much less laborious than repeatedly hammering at a prompt in a single tool, trying to get the right thing. It’s also much more enjoyable, and feels more ‘human’, in the sense that you’re bouncing around between different helpers, all of whom have different strengths. The fail-rate is thus significantly lowered.
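
    Concretely, the bouncing-around with the local models looks something like the loop below: the same prompt sent to several models, with the outputs printed side by side for comparison. Again, the filenames and folder path are placeholders for whatever happens to be on the drive.

    ```python
    # Sketch: run one prompt across several local models and compare the 'flavours'.
    # Model filenames and the folder path are placeholders.
    from gpt4all import GPT4All

    PROMPT = "Summarise the attached article in three sentences."  # article text elided
    MODEL_FILES = [
        "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
        "Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf",
        "mistral-7b-openorca.gguf2.Q4_0.gguf",
    ]

    for filename in MODEL_FILES:
        model = GPT4All(filename, model_path="/Volumes/llm-drive/models", allow_download=False)
        print(f"\n=== {filename} ===")
        print(model.generate(PROMPT, max_tokens=200))
    ```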

    Returning to ethos, using AI in this way feels more authentic. You learn more quickly how each tool functions, and what they’re best at. Jumping to different tools feels less like a context switch—as it might between software—and more like asking a different co-worker to weigh in. As someone who processes things through dialogue—be that with myself, with a journal, or with a friend or family member—this is a surprisingly natural way of working, of learning, and of creating. I may not be ‘the AI guy’ from a technical or qualifications standpoint, but I feel like I’m starting to earn the moniker, at least from a practical, runs-on-the-board perspective.

  • On Procreate and AI

    Made by me in, of course, Procreate (27 Aug 2024).

    The team behind the powerful and popular iPad app Procreate have been all over tech news in recent weeks, spruiking their anti-AI position. “AI is not our future” spans the screen of a special AI page on their website, followed by: “Creativity is made, not generated.”

    It’s a bold position. Adobe has been slowly rolling out AI-driven systems in their suite of apps, to mixed reactions. Tablet maker Wacom was slammed earlier this year for using AI-generated assets in their marketing. And after pocketing AU $47 million in investor funding in December 2023, Aussie AI generation platform Leonardo.Ai was snapped up by fellow local giant Canva in July for just over AU $120 million.

    Artist and user reaction to Procreate’s position has been one of near-universal praise. Procreate has grown steadily over the last decade, emerging as a cornerstone iPad-native art app, and only recently evolving towards desktop offerings. Its one-time purchase fee, a direct response to the ongoing subscriptions of competitors like Adobe, makes it a tempting choice for creatives.

    Tech commentators might say that this is an example of companies choosing sides in the AI ‘war’. But this is, of course, a reductive view of both technology and industries. For mid-size companies like Procreate, it’s not necessarily a case of ‘get on board or get left behind’. They know their audience, as evidenced by the response to their position on AI: “Now this is integrity,” wrote developer and creative Sebastiaan de With.

    Consumers are smarter than anyone cares to admit. If they want to try shiny new toys, they will; if they don’t, they won’t. And in today’s creative environment, where there are so many tools, workflows, and options to choose from, maybe they don’t have to pick one approach over another.

    Huge tech companies control the conversation around education, culture, and the future of society. That’s a massive problem, because leave your Metas, Alphabets, and OpenAIs to the side, and you find creative, subversive, independent, anarchic, inspiring innovation happening all over the place. Some of these folx are using AI, and some aren’t: the work itself is interesting, rather than the exact tools or apps being used.

    Companies ignore technological advancement at their peril. But deliberately opting out? Maybe that’s just good business.

  • Grotesque fascination

    A few weeks back, some colleagues and I were invited to share some new thoughts and ideas on the theme of ‘ecomedia’, as a lovely and unconventional way to launch Simon R. Troon’s newest monograph, Cinematic Encounters with Disaster: Realisms for the Anthropocene. Here’s what I presented: a few scattered scribblings on environmental imaginaries as mediated through AI.


    Grotesque Fascination:

    Reflections from my weekender in the uncanny valley

    In February 2024, OpenAI announced their video generation tool Sora. In the technical paper that accompanied this announcement, they referred to Sora as a ‘world simulator’. Not just Sora, but also DALL-E, Runway, and Midjourney: all of these AI tools further blur and problematise the lines between the real and the virtual. Image and video generation tools re-purpose, re-contextualise, and re-gurgitate how humans perceive their environments and those around them. These tools offer a carnival mirror’s reflection of what we privilege and prioritise, and what we prejudice against, in our collective imaginations. In particular today, I want to talk a little bit about how generative AI tools might offer up new ways to relate to nature, and how they might also call into question the ways that we’ve visualised our environment to date.

    AI media generators work from datasets comprising billions of images, along with text captions and sometimes video samples; the model maps all of this information, using some fairly advanced mathematics, into a high-dimensional representation often called the latent space. A random image of noise is then generated and fed through the model, along with a text prompt from the user. A neural network (typically a U-Net) then uses the text to gradually de-noise the image in a way that the model ‘believes’ is appropriate to the given prompt.
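
    To make that process a little more concrete, here’s a minimal sketch using the open-source diffusers library and a Stable Diffusion checkpoint, standing in for the proprietary tools named above; the model ID and prompt are just examples.

    ```python
    # Minimal text-to-image sketch with the open-source `diffusers` library:
    # start from random noise and gradually denoise it, guided by the text prompt.
    from diffusers import StableDiffusionPipeline

    # One example open checkpoint; weights are downloaded from the Hugging Face Hub.
    pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")

    image = pipe(
        "a mossy rainforest gully at dawn, mist rising",  # the prompt steering the denoising
        num_inference_steps=30,  # each step strips away a little more noise
        guidance_scale=7.5,      # how strongly the result is pushed toward the prompt
    ).images[0]

    image.save("generated_gully.png")
    ```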

    In these datasets, there are images of people, of animals, of built and natural environments, of objects and everyday items. These models can generate scenes of the natural world very convincingly. These generations remind me of the open virtual worlds in video games like Skyrim or Horizon: Zero Dawn, where there is a real, visceral sense of connection to those worlds as you move through them. In a similar way, when you’re playing with tools like Leonardo or Midjourney, there can often be visceral, embodied reactions to the images or media that they generate: Shane Denson has written about this in terms of “sublime awe” and “abject cringe”. Like video games, too, AI media generators allow us to observe worlds that we may never see in person. Indeed, some of the landscapes we generate may be completely alien or biologically impossible, at least on this planet, opening our eyes to different ecological possibilities or environmental arrangements. Visualising or imagining how ecosystems might develop is one way of potentially increasing awareness of those that are remote, unexplored, or endangered; we may also be able to imagine how the real natural world might be impacted by our actions in the distant future. These alien visions might also, I suppose, prepare us for encountering different ecosystems and modes of life and biology on other worlds.

    It’s worth considering, though, how this re-visualisation, virtualisation, and re-constitution of environments, be they realistic or not, might change, evolve, or hinder our collective mental image of ‘Nature’, and our capacity to imagine what constitutes it. The experience of generating ecosystems and environments may increase appreciation for our own very real, very tangible natural world and the impacts that we’re having on it; but like all imagined or technically-mediated processes, there is always a risk of disconnecting people from that same very real, very tangible world around them. They may well prefer the illusion; they may prefer some kind of perfection, some kind of banal veneer that they can have no real engagement with or impact on. And it’s easy to ignore the staggering environmental impacts of the technology companies pushing these tools when you’re engrossed in an ecosystem of apps and not of animals.

    In previous work, I proposed the concept of virtual environmental attunement, a kind of hyper-awareness of nature that might be enabled or accelerated by virtual worlds or digital experiences. I’m now tempted to revisit that theory in terms of asking how AI tools problematise that possibility. Can we use these tools to materialise or make perceptible something that is intangible, virtual, immaterial? What do we gain or lose when we conceive or imagine, rather than encounter and experience?

    Machine vision puts into sharp relief the limitations of humanity’s perception of the world. But for me there remains a certain romance and beauty and intrigue — a grotesque fascination, if you like — to living in the uncanny valley at the moment, and it’s somewhere that I do want to stay a little bit longer. This is despite the omnipresent feeling of ickiness and uncertainty when playing with these tools, while the licensing of the datasets that they’re trained on remains unclear. For now, though, I’m trying to figure out how connecting with the machine-mind might give some shape or sensation to a broader feeling of dis-connection.

    How my own ideas and my capacity to imagine might be extended or supplemented by these tools, changing the way I relate to myself and the world around me.

  • Conjuring to a brief

    Generated by me with Leonardo.Ai.

    This semester I’m running a Media studio called ‘Augmenting Creativity’. The basic goal is to develop best practices for working with generative AI tools, not just in creative workflows but also as part of university assignments, academic research, and everyday routines. My motivation, or philosophy, for this studio is that so much attention is being focused on the outputs of tools like Midjourney and Leonardo.Ai (as well as outputs from textbots like ChatGPT); what I guess I’m interested in is exploring more precisely where in workflows, jobs, and daily life these tools might actually be helpful.

    In class last week we held a Leonardo.Ai hackathon, inspired by one of the workshops that was run at the Re/Framing AI event I convened a month or so ago. Leonardo.Ai generously donated some credits for students to play around with the platform. Students were given a brief around what they should try to generate:

    • an AI Self-Portrait (using text only; no image guidance!)
    • three images to envision the studio as a whole (one conceptual, a poster, and a social media tile)
    • three square icons to represent one task in their daily workflow (home, work, or study-related)

    For the hackathon proper, students were only able to adjust the text prompt and the Preset Style; every other control had to remain unchanged, including the Model (Phoenix), Generation Mode (Fast), and Prompt Enhance (off).

    Students were curious and excited, but also faced some immediate challenges with the underlying mechanics of image generators; they had to play around with word choice in their prompts to get close to the desired results. The biases and constraints of the Phoenix model quickly became apparent as the students tested its limitations. For some students this was more cosmetic, such as requesting that Leonardo.Ai generate a face with no jewellery or facial hair; this produced mixed results, in that explicitly negative prompts sometimes seemed to encourage the model to produce exactly what wasn’t wanted. Other students encountered difficulties around race or gender presentation: the model struggles a lot with nuances of race, e.g. mixed-race people or specific racial subsets, and also often depicts sexualised presentations of female-presenting people (male-presenting too, but much less frequently).

    Last week’s session proved a solid test of Leonardo.Ai’s utility and capacity for generating assets and content (we sent some general feedback to Leonardo.Ai on platform usability and potential improvements), but it was also useful for figuring out how and where the students might use the tool in their forthcoming creative projects.

    This week we’ve spent a little time on the status of AI imagery as art, some of the ethical considerations around generative AI, and where some of the supposed impacts of these tools may most keenly be felt. In class this morning, the students were challenged to deliver lightning talks on recent AI news, developing their presentation and media analysis skills. From here, we move a little more deeply into where creativity lies in the AI process, and how human/machine collaboration might produce innovative content. The best bit, as always, will be seeing where the students go with these ideas and concepts.

  • Unknown Song By…

    A USB flash drive on a wooden surface.

    A week or two ago I went to help my Mum downsize before she moves house. As with any move, there was a lot of accumulated ‘stuff’ to go through; of course, this isn’t just the manual labour of sorting, moving, and removing, but also all the associated historical, emotional, material, and psychological labour that goes along with it. Plenty of old heirlooms and photos and treasures, but also a ton of junk.

    While the trip out there was partly to help out, it was also to claim anything I wanted, lest it accidentally end up passed off or chucked away. I ended up ‘inheriting’ a few bits and bobs, not least of which was an old PC, which may necessitate a follow-up to my tinkering earlier this year.

    Among the treasures I claimed was an innocuous-looking black and red USB stick. On opening up the drive, I was presented with a bunch of folders, clearly some kind of music collection.

    While some — ‘Come Back Again’ and ‘Time Life Presents…’ — were obviously albums, others were filled with hundreds of files: some sort of library or catalogue, perhaps. Most intriguing, though, not to mention intimidating, was that many of these files had no discernible name or metadata. Like zero. Blank. You’ve got a number for a title, a duration, mono/stereo, and a sample rate. Most are MP3s; there are a handful of WAVs.

    Cross-checking dates and listening to a few of the mystery files, Mum and I figured out that this USB belonged to a late family friend. This friend worked for much of his life in radio; the USB was the ‘core’ of his library, which he presumably took from station to station as he moved about the country.

    Like most media, music happens primarily online now, on platforms. For folx of my generation and older, it doesn’t seem that long ago that music was all physical, on cassettes, vinyl, CDs. But then, seemingly all of a sudden, music happened on the computer. We ripped all our CDs to burn our own, or to put them on an MP3 player or iPod, or to build up our libraries. We downloaded songs off LimeWire or KaZaA, then later torrented albums or even entire discographies.

    With physical media, the packaging is the metadata. Titles, track listings, personnel and crew, descriptions, and durations adorn jewel cases, DVD covers, liner notes, and so on. Being thrust online as we were, we relied partly on the goodwill and labour of others — be they record labels or generous enthusiasts — to have entered metadata for CDs. On the not infrequent occasions when we encountered a CD without this info, we had to enter it ourselves.

    Wake up and smell the pixels.

    This process ensured that you could look at the little screen on your MP3 player or iPod and see what the song was. If you were particularly fussy about such things (definitely not me) you would download album art to include, too; if you couldn’t find the album art, it’d be a picture of the artist, or of something else that represented the music to you.

    This labour set up a relationship between the music listener and their library; between the user and the file. The ways that software like iTunes or Winamp or Media Player would catalogue or sort your files (or not), and how your music would be presented in the interface: these things changed your relationship to your music.

    For all the incredible privilege and access that apps like Spotify, Apple Music, Tidal, and the like offer, we have these things at the expense of that user-file-library relationship. I’m not placing a judgement on this, necessarily, just noting how things have changed. Users and listeners will always find meaningful ways to engage with their media: the proliferation of hyper-specific playlists for every mood, time of day, or activity is an example of this. But what do we lose when we no longer control the metadata?

    On that USB I found, there are over 3500 music files. From a quick glance, I’d say about 75% have some kind of metadata attached, even if it’s just the artist and song title in the filename. Many of the rest, we know for certain, were directly digitised from vinyl, compact cassette, or spooled tape (for a reel-to-reel player). There is no automatic database search for these files. Dipping in and out, it will likely take me months to listen to the songs, note down enough lyrics for a search, then try to pin down which artist/version/album/recording I’m hearing. Many of these probably won’t exist on apps like Spotify, or even in dingy corners of YouTube.
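
    For the triage itself, a rough sketch of where I’d start: walking the drive and flagging which MP3s carry no artist or title tags at all. This assumes the Python library mutagen; the mount point is a placeholder, and the handful of WAVs are skipped, since they rarely carry tags anyway.

    ```python
    # Sketch: walk the USB drive and list MP3s with no artist or title metadata.
    # The mount point below is a hypothetical placeholder.
    from pathlib import Path
    import mutagen

    MUSIC_DIR = Path("/Volumes/MYSTERY_USB")

    untagged = []
    for path in MUSIC_DIR.rglob("*"):
        if path.suffix.lower() != ".mp3":
            continue
        audio = mutagen.File(path, easy=True)  # returns None if the file can't be parsed
        if audio is None or not audio.tags:
            untagged.append(path)
        elif not audio.tags.get("artist") and not audio.tags.get("title"):
            untagged.append(path)

    print(f"{len(untagged)} files with no artist or title metadata")
    ```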

    A detective mystery, for sure, but also a journey through music and media history, and one I’m very much looking forward to.