The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: AI

  • Give me your answer, do

    By Ravi Kant on Pexels, 13 Mar 2018.

    For better or worse, I’m getting a bit of a reputation as ‘the AI guy’ in my immediate institutional sub-area. Depending on how charitable you’re feeling, this could be seen as very generous or wildly unfounded. I am not in any way a computer scientist or expert on emergent consciousness, synthetic cognition, language models, media generators, or even prompt engineering. I remain the same old film and media teacher and researcher I’ve always been. But I have always used fairly advanced technology as part of anything creative. My earliest memories are of typing up, decorating, and printing off books or banners or posters from my Dad’s old IBM computer. From there it was using PC laptops and desktops, and programs like Publisher or WordPerfect, 3D Movie Maker and Fine Artist, and then more media-specific tools at uni, like Final Cut and Pro Tools.

    Working constantly with computers, software, and apps automatically turns you into something of a problem-solver—the hilarious ‘joke’ of media education is that the teachers have to be only slightly quicker than their students at Googling a solution. As well as problem-solving, I am predisposed to ‘daisy-chaining’. My introduction to the term was as a means of connecting multiple devices together—on Mac systems circa 2007–2017 this was fairly standard practice thanks to the inter-connectivity of FireWire cables and ports (though I’m informed that this is still common even through USB). Reflecting back on years of software and tool usage, though, I can see that I was daisy-chaining constantly: ripping from CD or DVD, or capturing from tape, then converting to a usable format in one program, then importing the export to another program, editing or adjusting, exporting once again, then burning or converting, et cetera, et cetera. Even not that long ago, there weren’t exactly ‘one-stop’ solutions for media-making, in the way that you might think of an app like CapCut or Instagram now.

    There’s also a kind of ethos to daisy-chaining. In shifting from one app, program, platform, or system to another, you’re learning different ways of doing things, adapting your workflows each time, even if only subtly. Each interface presents you with new or different options, so you can apply a unique combination of visual, aural, and affective layers to your work. There’s also an ethos of independence: you are not locked in to one app’s way of doing things. You are adaptable, changeable, and you cherry-pick the best of what a variety of tools have to offer in order to make your work the best it can be. This is the platform economics argument, or the platform politics argument, or some variant on all of this. Like everyone, I’ve spent many hours whinging about the time it took to make stuff or to get stuff done, wishing there were a ‘perfect app’ that would just do it all. But over time I’ve come to love my bundle of tools—the set I download and install first whenever I get a new machine (or have to wipe an old one); my (vomits) ‘stack’.

    * * * * *

    This philosophy is what I’ve carried over into my use of AI tools. Out of all of them, I suppose I use Claude the most. I’ve found it the most straightforward in terms of setting up custom workspaces (what Claude calls ‘Projects’ and what ChatGPT calls ‘Custom GPTs’), and I just generally really like the character and flavour of the responses I get back. I like that it’s a little wordy, a little more academic, a little more florid, because that’s how I write and speak; though I suppose the outputs are not just encoded into the model, but also a mirror of how I’ve engaged with it. Right now in Claude I have a handful of projects set up:

    • Executive Assistant: Helps me manage my time and prioritise tasks, and keeps me on track with work and creative projects. I’ve given it summaries of all my projects and commitments, so it can offer informed suggestions where necessary.
    • Research Assistant: I’ve given this most of my research outputs, as well as a curated selection of research notes, ideas, reference summaries, and sometimes whole source texts. This project is where I’ll brainstorm research or teaching ideas, flesh out concepts, build courses, and so on.
    • Creative Partner: This remains semi-experimental, because I actually don’t find AI that useful in this particular instance. However, this project has been trained on a couple of my produced media works, as well as a handful of creative ideas. I find the responses far too long to be useful, and often very tangential to what I’m actually trying to get out of it—but this is as much a project context and prompting problem as it is anything else.
    • 2 x Course Assistants: Two projects have been trained with all the materials related to the courses I’m running in the upcoming semester. These projects are used to brainstorm course structures, lesson plans, and even lecture outlines.
    • Systems Assistant: This is a little different to the Executive/Research Assistants, in that it is specifically set up around ‘systems’: the various tools, methods, and workflows that I use for any given task. It’s also a kind of ‘life admin’ helper, in the sense of managing information, documents, knowledge, and so on. Now that I think of it, ‘Daisy’ would probably be a great name for this project—but then again…

    I will often bounce ideas, prompts, notes between all of these different projects. How much this process corrupts the ‘purity’ of each individual project is not particularly clear to me, though I figure if it’s done in an individual chat instance it’s probably not that much of an issue. If I want to make something part of a given project’s ongoing working ‘knowledge’, I’ll put a summary somewhere in its context documents.

    But Claude is just one of the AI tools I use. I also have a bunch of language models on a hard drive that is always connected to my computer; I use these through the app GPT4All. These have similar functionality to Claude, ChatGPT, or any other proprietary/corporate LLM chatbot. Apart from the upper limit on their context windows, they have no usage limits; they run offline, privately, and at no cost. Their efficacy, though, is mixed. Llama and its variants are usually pretty reliable—though this is a Meta-built model, so there’s an accompanying ‘ick’ whenever I use it. Falcon, Hermes, and OpenOrca are independently developed, though these have taken quite some getting used to—I’ve also found that cloning them, then priming each clone with specific documents and a unique context prompt, is the best way to use them.
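    For the curious, here’s roughly what that local setup looks like, as a minimal sketch using the gpt4all Python bindings (the same engine that powers the desktop app). The model filename, folder path, and system prompt below are illustrative placeholders rather than a record of my actual setup; swap in whichever model file you’ve downloaded.

    ```python
    # Minimal sketch: run a local model fully offline via the gpt4all bindings.
    # Model filename, path, and system prompt are placeholders.
    from gpt4all import GPT4All

    # Load a quantised model file from a local folder (e.g. an external drive);
    # allow_download=False keeps everything offline.
    model = GPT4All(
        "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
        model_path="/Volumes/LLMs",
        allow_download=False,
    )

    # A 'clone' is effectively the same weights plus a custom system prompt
    # (and whatever context documents you paste into the conversation).
    with model.chat_session(system_prompt="You are a concise research assistant."):
        reply = model.generate("Summarise the key claims of the pasted article: ...", max_tokens=300)
        print(reply)
    ```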

    I frequently jump between all of these tools, testing the same prompt across multiple models, or asking one model to generate prompts for another. This system of usage may seem confusing at first glance, but it’s actually quite fluid. The outputs I get are interesting, diverse, and useful, rather than all being of the same ‘flavour’. Getting three different summaries of the same article, for example, lets me see what different models privilege in their ‘reading’—and then I’ll know which tool to use to target that aspect next time. Using AI in this way is still time-intensive, but I’ve found it much less laborious than repeatedly hammering at a prompt in a single tool, trying to get the right thing. It’s also much more enjoyable, and feels more ‘human’, in the sense that you’re bouncing around between different helpers, all of whom have different strengths. The fail rate is thus significantly lowered.
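    To illustrate that multi-model habit a bit more concretely, here’s a small sketch (again assuming the gpt4all Python bindings, with placeholder model filenames and paths) that sends one prompt to several local models so their different ‘readings’ can be compared side by side.

    ```python
    # Sketch: run the same prompt across several local models and print each
    # response for comparison. Model filenames and the path are placeholders.
    from gpt4all import GPT4All

    MODEL_FILES = [
        "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
        "Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf",
        "mistral-7b-openorca.gguf2.Q4_0.gguf",
    ]

    PROMPT = "Summarise this article in three bullet points: ..."

    for filename in MODEL_FILES:
        model = GPT4All(filename, model_path="/Volumes/LLMs", allow_download=False)
        with model.chat_session():
            answer = model.generate(PROMPT, max_tokens=250)
        # Print each model's 'reading' so they can be skimmed side by side.
        print(f"--- {filename} ---\n{answer}\n")
    ```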

    Returning to ethos, using AI in this way feels more authentic. You learn more quickly how each tool functions, and what each is best at. Jumping between tools feels less like a context switch—as it might between software—and more like asking a different co-worker to weigh in. As someone who processes things through dialogue—be that with myself, with a journal, or with a friend or family member—this is a surprisingly natural way of working, of learning, and of creating. I may not be ‘the AI guy’ from a technical or qualifications standpoint, but I feel like I’m starting to earn the moniker, at least from a practical, runs-on-the-board perspective.

  • On Procreate and AI

    Made by me in, of course, Procreate (27 Aug 2024).

    The team behind the powerful and popular iPad app Procreate have been across tech news in recent weeks, spruiking their anti-AI position. “AI is not our future” spans the screen of a special AI page on their website, followed by: “Creativity is made, not generated.”

    It’s a bold position. Adobe has been slowly rolling out AI-driven systems in their suite of apps, to mixed reactions. Tablet maker Wacom was slammed earlier this year for using AI-generated assets in their marketing. And after pocketing AU $47 million in investor funding in December 2023, Aussie AI generation platform Leonardo.Ai was snapped up by fellow local giant Canva in July for just over AU $120 million.

    Artists and users have responded to Procreate’s position with near-universal praise. Procreate has grown steadily over the last decade, emerging as a cornerstone iPad-native art app, and only recently evolving towards desktop offerings. Its one-time purchase price, a direct response to the ongoing subscriptions of competitors like Adobe, makes it a tempting choice for creatives.

    Tech commentators might say that this is an example of companies choosing sides in the AI ‘war’. But this is, of course, a reductive view of both technology and industries. For mid-size companies like Procreate, it’s not necessarily a case of ‘get on board or get left behind’. They know their audience, as evidenced by the response to their position on AI: “Now this is integrity,” wrote developer and creative Sebastiaan de With.

    Consumers are smarter than anyone cares to admit. If they want to try shiny new toys, they will; if they don’t, they won’t. And in today’s creative environment, where there are so many tools, workflows, and options to choose from, maybe they don’t have to pick one approach over another.

    Huge tech companies control the conversation around education, culture, and the future of society. That’s a massive problem, because if you leave your Metas, Alphabets, and OpenAIs to the side, you find creative, subversive, independent, anarchic, inspiring innovation happening all over the place. Some of these folx are using AI and some aren’t: it’s the work itself that’s interesting, rather than the exact tools or apps being used.

    Companies ignore technological advancement at their peril. But deliberately opting out? Maybe that’s just good business.

  • Elusive images

    Generated with Leonardo.Ai, prompts by me.

    Up until this year, AI-generated video was something of a white whale for tech developers. Early experiments resulted in janky-looking acid-dream GIFs: vaguely recognisable frames and figures, but nothing in terms of consistent, logical motion. Then things started to get a little, or rather a lot, better. Through constant experimentation and development, the nerds (and I use this term in a nice way) managed to get the machines (and I use this term in a knowingly reductive way) to produce little videos that could have been clips from a film or a human-made animation. To reduce thousands of hours of math and programming into a pithy quotable, the key was this: they encoded time.

    RunwayML and Leonardo.Ai are probably the current forerunners in the space, allowing text-to-image-to-(short)video as a seamless user-driven process. RunwayML also offers text-to-audio generation, which you can then use to generate an animated avatar speaking those words; this avatar can be yourself, another real human, a generated image, or something else entirely. There are also Pika, Genmo, and many others offering variations on this theme.

    Earlier this year, OpenAI announced Sora, their video generation tool. One assumes this will be built into ChatGPT, the chatbot that serves as the interface for other OpenAI products like DALL-E and custom GPTs. The published results from Sora are pretty staggering, though it’s an open secret that these samples were chosen from many not-so-great results. Critics have also noted that even the supposed exemplars have their flaws. Similar things were said about image generators only a few years ago, though, so one assumes that the current state of things is the worst it will ever be.

    Creators are now experimenting with AI films. The aforementioned RunwayML is currently running its second AI Film Festival in New York. Many AI films are little better than abstract pieces that lack the dynamism and consideration to be called even avant-garde. However, there are a handful that manage to transcend their technical origins. But that the same could be said of all media, all art, seems to elude critics and commentators and, worst of all, my fellow scholars.

    It is currently possible, of course, to use AI tools to generate most components, and even to compile found footage into a complete video. But this is an unreliable method that offers little of the creative control that filmmakers might wish for. Creators employ a huge variety of tools, workflows, and methods. The simplest approach might be to prompt ChatGPT with an idea, ask for a fleshed-out treatment, and then use other tools to generate or source audiovisual material that the user then edits in software like Resolve, Final Cut, or Premiere. Others build on this post-production workflow by generating music with Suno or Udio; or they might compose music themselves and have it played by an AI band or orchestra.

    As with everything, though, the tools don’t matter. If the finished product doesn’t have a coherent narrative, theme, or idea, it remains a muddle of modes and outputs that offers nothing to the viewer. ChatGPT may generate some poetic ideas on a theme for you, but you still have to do the cognitive work of fleshing those out, sourcing your media, and arranging that media (or guiding a tool to do it for you). Depending on what you cede to the machine, you may or may not be happy with the result — cue more refining, more revisiting, more processing, more thinking.

    AI can probably replace us humans for low-stakes media-making, sure. Copywriting, social media ads and posts, the nebulous corporate guff that comprises most of the dead internet. For AI video, the missing component of the formula was time. But for AI film, time-based AI media of any meaning or consequence, encoding time was just the beginning.

    AI media won’t last as a genre or format. Call that wild speculation if you like, but I’m pretty confident in stating it. AI media isn’t a fad, though, I think, in the same way that blockchain and NFTs were. AI tools are showing themselves to be capable content creators and creative collaborators; events like the AI Film Festival are how these tools test and prove themselves in this regard. To choose a handy analogue, the original ‘film’ — celluloid exposed to light to capture an image — still exists. But that format is distinct from film as a form. It’s distinct from film as a cultural idea. From film as a meme or filter. Film, somehow, remains a complex assemblage of technical, social, material and cultural phenomena. Following that historical logic, I don’t think AI media will last in its current technical or cultural form. That’s not to say we shouldn’t be on it right now: quite the opposite, in fact. But to do that, don’t look to the past, or to textbooks, or even to people like me, to be honest. Look to the true creators: the tinkerers, the experimenters, what Apple might once have called the crazy ones.

    Creators and artists have always pushed the boundaries, have always guessed at what matters and what doesn’t, have always shared those guesses with the rest of us. Invariably, those guesses miss some of the mark, but taken collectively they give a good sense of a probable direction. That instinct to take wild stabs is something that LLMs, even an artificial general intelligence, will never be truly capable of. Similarly, the complexity of something like a novel or a feature film eludes these technologies. The ways the tools become embedded, the ways the tools are treated or rejected, the ways they become social or cultural: that’s not for AI tools to do. That’s on us. Anyway, right now AI media is obsessed with its own nature and role in the world; it’s little better than a sequel to 2001: A Space Odyssey or Her. But like those films and countless other media objects, it does show us some of the ways we might either lean in to the change, or purposefully resist it. Any thoughts here on your own uses are very welcome!

    The creative and scientific methods blend in a fascinating way with AI media. Developers build tools that do a handful of things; users then learn to daisy-chain those tools together in personal workflows that suit their ideas and processes. To be truly innovative, creators will develop bold, original ideas (themes, stories, experiences), and then leverage their workflows to produce those ideas. It’s not just AI media. It’s AI media folded into everything else we already do, use, and produce. That’s where the rubber meets the road, so to speak; where a tool or technique becomes the culture. That’s how it worked with printing and publishing, cinema and TV, computers, and the internet, and that’s how it will work with AI. That’s where we’re headed. It’s not the singularity. It’s not the end of the world. It’s far more boring and fascinating than either of those could ever hope to be.

  • Blinded by machine visions

    A grainy, indistinct black and white image of a human figure wearing a suit and tie. The bright photo grain covers his eyes like a blindfold.
    Generated with Adobe Firefly, prompts by me.

    I threw out a quick response on the socials this morning to this article and, in particular, to some of the reactions I was seeing. Here’s the money quote from photographer Annie Leibovitz, when asked about the effects of AI tools, generative AI technology, and the like, on photography:

    “That doesn’t worry me at all,” she told AFP. “With each technological progress, there are hesitations and concerns. You just have to take the plunge and learn how to use it.”1

    The paraphrased quotes continue:

    She says AI-generated images are no less authentic than photography.

    “Photography itself is not really real… I like to use Photoshop. I use all the tools available.”

    Even deciding how to frame a shot implies “editing and control on some level,” she added.2

    A great many folx were posting responses akin to ‘Annie doesn’t count because she’s in the 1%’, ‘she doesn’t count because she’s successful’, or ‘she doesn’t have to worry anymore’, et cetera.

    On the one hand it’s typical reactionary stuff with which the socials are often ablaze. On the other hand, it’s fair to fear the impact of a given innovation on your livelihood or your passion.

    As I hint in my own posts3, though, I think the temptation to leap on this as privilege is premature, and a little symptomatic of whatever The Culture and/or The Discourse is at the moment, and has been for the duration of the platformed web, if not much longer.

    Leibovitz is and has always been a jobbing artist. Sure, in later years she has been able to pick and choose a little more, but by all accounts she is a busy and determined professional, treating every job with just as much time, effort, and dedication as she always has. The work, for Leibovitz, has just as much value as — if not more than — the product or the paycheck.

    I don’t mean to suddenly act my age, or appear much older and grumpier than I am, but I do wonder about how much time aspiring or current photographers spend online discussing and/or worrying about and/or reacting to the latest update or the current fad-of-the-moment. I 100% understand the need for today’s artists and creators to engage in some way with the social web, if only to put their names out there to try and secure work. But if you’re living in the comments, whipping yourself and others into a frenzy about AI or whatever it is, is that really the best use of your time?

    The irony of me asking such questions on a blog where I do nothing but post and react is not lost on me, but this blog for me is a scratchpad, a testing ground, a commonplace book; it’s a core part of my ‘process’, whatever that is, and whatever it’s for. This is practice for other writing, for future writing, for my identity, career, creative endeavours as a writer. It’s a safe space; I’m not getting angry (necessarily), or seeking out things to be angry about.

    But I digress. Leibovitz is not scared of AI. And as someone currently working in this space, I can’t disagree. Having even a rudimentary understanding of what these tools are actually doing will dispel some of the fear.

    Further, photography, like the cinema that it birthed, has already died a thousand deaths, and will die a thousand more.

    Brilliant4 photography lecturer and scholar Alison Bennett speaks to the legacy and persistence of photographic practice here:

    “Recent examples [of pivotal moments of change in photography] include the transition from analogue film to digital media in the late 20th century, then the introduction of the internet-connected smart phone from 2007,” they said.

    “These changes fundamentally redefined what was possible and how photography was used.

    “The AI tipping point is just another example of how photography is constantly being redefined.”5

    As ever, the tools are not the problem. The real enemies are the companies and people driving the tools into the mainstream at scale. The companies that train their models on unlicensed datasets drawn from copyrighted material. The people who buy into their own bullshit about AI and AGI being some kind of evolutionary and/or quasi-biblical moment.

    For every post shitting on Annie Leibovitz, there should be at least twenty actively shitting on OpenAI and their ilk, pushing for ethically sourced and maintained datasets, and pushing for systemic change to the resource management of AI systems, including sustainable data centres.

    The larger conceptual questions are around authenticity and hard work. If you use AI tools, are you still an authentic artist? Aren’t AI tools just a shortcut? Of course, the answer to both is ‘not necessarily’. If you’ve still done the hard yards to learn about your craft, to learn about how you work, to discover what kinds of stories and experiences you want to create, to find your voice, in whatever form it takes, then generative AI is a paintbrush. A weird-looking paintbrush, but a paintbrush nevertheless (or plasticine, or canvas, or glitter, or an app, etc. etc. ad infinitum).

    Do the work, and you too can be either as ambivalent as Leibovitz, or as surprised and delighted as you want to be. Either way, you’re still in control.

    Notes

    1. Agence France-Presse 2024, ‘Photographer Annie Leibovitz: “AI doesn’t worry me at all”’, France 24, viewed 26 March 2024, <https://www.france24.com/en/live-news/20240320-photographer-annie-leibovitz-ai-doesn-t-worry-me-at-all>.
    2. Ibid.
    3. See here, and with tiny edits for platform affordances here and here. What’s the opposite of POSSE? PEPOS?
    4. I am somewhat biased: at the time of writing, Dr. Bennett and I share a place of work. To look through their expanded (heh) works, go here.
    5. Odell, T 2024, ‘New exhibition explores AI’s influence on the future of photography’, RMIT University, viewed 26 March 2024, <https://www.rmit.edu.au/news/all-news/2024/mar/photo-2024>.
