The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Year: 2024

  • De-platforming is hard

    Falling (detail), by me, 18 Nov 2024.

    I have two predilections that sometimes work hand in hand, and other times butt up against each other. The first is apps, tools, technology, all the shiny things; the second is a deep belief in supporting independent creators, developers, inventors, and so on. You can see fairly clearly here where the tensions lie.

    For a long time I’ve mainly indulged the former, while proselytising-but-not-really-acting-on the latter. I’ve done the best I can to try smaller, indie folx as much as possible, but the juggernaut of platform capitalism is a shrewd and insidious demon; one that is very, very difficult to exorcise.

    This year has been a period of learning and attempting to reorient and re-prioritise. The first big move was this site, which I desperately wanted to take off WordPress’s hosting. Having found a pretty good hosting deal elsewhere, it was only a few weeks of mucking about to transfer everything over.

    It’s ironic, in a way, that one of the first things I did after migrating the site was to install WordPress as a front-end system to keep everything running1. I did give less corporate-affiliated, more indie and ethical alternatives a look and a try, but it was either too tricky at the time to convert the existing archive, or they just weren’t particularly intuitive to me. At the time of writing, I’ve been working with the WordPress platform personally and professionally for well over a decade: it’s hard to pull up roots from that foundation.

    A few weeks ago I was looking at my budget spreadsheet; I’m not necessarily pinching pennies or anything at the moment, but after spending most of my life not having any kind of financial system or oversight or instinct at all, this simple spreadsheet is nothing short of a miracle. I was tinkering with expense categories and absently flicked to app subscriptions, and was fairly shocked at the total I saw. This category includes pro/premium subscriptions for apps like Todoist and Fantastical, but also many others that I’ve accumulated, particularly in the last year or two as I’ve really built up my work and personal workflows and systems. Now this work is important, and as noted earlier I do love playing around with new apps, toys, and so on. But when you see an annual/monthly/fortnightly total like that where it’s not necessarily an ‘essential’ purchase, it can pull you up short.

    When I was re-jigging my old Raspberry Pi earlier in the year (possibly worth re-visiting that in a future post), I was keen to try and set it up as its own little server, running a bunch of little apps that might serve as a private, personal organisation/admin hub. Self-hosting is an awesome idea in theory and principle, but in practice, without a fairly hefty amount of sysadmin knowledge, it can be tricky. But emboldened by the desire to save some cash, I waded back into that world once more; not necessarily to set up a private server, but at least to load up some self-hosted alternatives to the larger expenses.

    I went in a little more prepared this time, doing some reading, watching a few videos, getting my head around things like package managers, Docker and its containers, Homebrew, and even basic command line usage. Some of the apps I tried were intriguing, some were intuitive and well-designed, others were a little more wireframe-like, but still generally performed their tasks pretty well. After trying maybe a dozen self-hosted apps, though, I’m still using only one, and in most of the other cases, I’ve retained my subscriptions to the apps I was using before.
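    For context, once Docker is installed, ‘loading up’ a self-hosted app mostly comes down to pulling an image and running a container. Here’s a minimal sketch using the Docker SDK for Python, with a generic nginx image standing in for whichever app you’re actually trying; names and ports are placeholders to adapt to your own setup:

    ```python
    # Minimal example of running a self-hosted app as a Docker container,
    # using the Docker SDK for Python (pip install docker).
    # nginx is just a stand-in image; a real self-hosted app would go here.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    container = client.containers.run(
        "nginx:alpine",            # the app's image
        name="selfhosted-demo",    # hypothetical container name
        detach=True,               # run in the background
        ports={"80/tcp": 8080},    # expose the app on http://localhost:8080
        restart_policy={"Name": "unless-stopped"},  # come back up after reboots
    )
    print(f"Running: {container.name} ({container.short_id})")
    ```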

    As with WordPress, it’s hard to shift to something new. But it’s particularly hard when much of your ‘system’ has been chugging along effectively for several months, even years. My own system is far from perfect. Many of the parts of the system talk to each other, sometimes seamlessly via a widget or integration, other times via some kind of jerry-rigged or brute force solution. But many of the parts don’t interact. It’s clean and pleasing sometimes; other times it’s messy and frustrating. But after fumbling around in the dark for many years, trying all sorts of different methods, apps, systems, modes, on- and off-line configurations, it basically comes down to the satisfaction of having a system that I constructed myself that works for me. That satisfaction is what makes it hard to tweak the way things work at the moment.

    Experiments are important, though, and through the various little adventures I’ve had this year—from tinkering with old PCs, Macs, and Pis, to starting to consolidate and catalogue my not-insignificant digital media collection, to trying out a few indie/self-hosted options—I’ve started to wade into a whole other ecosystem of hardware, software, workflows, philosophies, methods, and techniques. This feels like somewhere I can be curious, can learn, can experiment, can fail, can build and create, and find pathways to a system slightly less dependent on tech megaliths: something ethical, sustainable, adaptable, friendly, and fun.


    1. WordPress is both a corporation and a (supposedly) non-profit organisation. They’re usually differentiated via their URL suffixes, i.e. WP.com is the corp, WP.org is the nonprofit. WP.org offers their CMS tool open-source, so anyone can install it on their web server regardless of host. That’s what I did for this site when I shifted. ↩︎
  • Alternate Spaces

    Alternate Spaces © 2024 by Daniel Binns is licensed under CC BY-SA 4.0.

    See more AI weirdness here.

  • On Procreate and AI

    Made by me in, of course, Procreate (27 Aug 2024).

    The team behind the powerful and popular iPad app Procreate have been across tech news in recent weeks, spruiking their anti-AI position. “AI is not our future” spans the screen of a special AI page on their website, followed by: “Creativity is made, not generated.”

    It’s a bold position. Adobe has been slowly rolling out AI-driven systems in their suite of apps, to mixed reactions. Tablet maker Wacom was slammed earlier this year for using AI-generated assets in their marketing. And after pocketing AU $47 million in investor funding in December 2023, Aussie AI generation platform Leonardo.Ai was snapped up by fellow local giant Canva in July for just over AU $120 million.

    Artist and user reactions to Procreate’s position have been near-universally positive. Procreate has grown steadily over the last decade, emerging as a cornerstone iPad-native art app, and only recently evolving towards desktop offerings. Its one-time purchase fee, a direct response to the ongoing subscriptions of competitors like Adobe, makes it a tempting choice for creatives.

    Tech commentators might say that this is an example of companies choosing sides in the AI ‘war’. But this is, of course, a reductive view of both technology and industries. For mid-size companies like Procreate, it’s not necessarily a case of ‘get on board or get left behind’. They know their audience, as evidenced by the response to their position on AI: “Now this is integrity,” wrote developer and creative Sebastiaan de With.

    Consumers are smarter than anyone cares to consider. If they want to try shiny new toys, they will; if they don’t, they won’t. And in today’s creative environment, where there are so many tools, workflows, and options to choose from, maybe they don’t have to pick one approach over another.

    Huge tech companies control the conversation around education, culture, and the future of society. That’s a massive problem, because leave your Metas, Alphabets, and OpenAIs to the side, and you find creative, subversive, independent, anarchic, inspiring innovation happening all over the place. Some of these folx are using AI, and some aren’t: the work itself is interesting, rather than the exact tools or apps being used.

    Companies ignore technological advancement at their peril. But deliberately opting out? Maybe that’s just good business.

  • Grotesque fascination

    A few weeks back, some colleagues and I were invited to share some new thoughts and ideas on the theme of ‘ecomedia’, as a lovely and unconventional way to launch Simon R. Troon’s newest monograph, Cinematic Encounters with Disaster: Realisms for the Anthropocene. Here’s what I presented; a few scattered scribblings on environmental imaginaries as mediated through AI.


    Grotesque Fascination:

    Reflections from my weekender in the uncanny valley

    In February 2024 OpenAI announced their video generation tool Sora. In the technical paper that accompanied this announcement, they referred to Sora as a ‘world simulator’. Not just Sora, but also DALL-E, Runway, and Midjourney: all of these AI tools further blur and problematise the lines between the real and the virtual. Image and video generation tools re-purpose, re-contextualise, and re-gurgitate how humans perceive their environments and those around them. These tools offer a carnival mirror’s reflection of what we privilege, what we prioritise, and what we hold prejudice against in our collective imaginations. In particular today, I want to talk a little bit about how generative AI tools might offer up new ways to relate to nature, and how they might also call into question the ways that we’ve visualised our environment to date.

    AI media generators work from datasets that comprise billions of images, as well as text captions and sometimes video samples; the model maps all of this information, using advanced mathematics, into a high-dimensional representation often called the latent space. To generate, a random field of noise is fed through the model alongside a text prompt from the user, and a neural network (in many image generators, a U-Net) gradually de-noises the image in a way the model judges appropriate to the given prompt.
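    To ground that a little: the big commercial tools don’t publish their internals, but the same denoising loop is available in open-source form. Here’s a minimal sketch using Hugging Face’s diffusers library and Stable Diffusion, offered as an analogue rather than a description of Sora, DALL-E or Midjourney:

    ```python
    # A minimal text-to-image sketch using Hugging Face's diffusers library.
    # Stable Diffusion stands in here for closed platforms whose exact
    # pipelines are not public.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # an openly available diffusion model
        torch_dtype=torch.float16,
    ).to("cuda")  # assumes a CUDA-capable GPU is available

    prompt = "a dense temperate rainforest at dawn, mist over a river"
    # The pipeline starts from random noise in the latent space and, guided by
    # the text prompt, de-noises it over a number of steps into a finished image.
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save("generated_forest.png")
    ```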

    In these datasets, there are images of people, of animals, of built and natural environments, of objects and everyday items. These models can generate scenes of the natural world very convincingly. These generations remind me of the open virtual worlds in video games like Skyrim or Horizon: Zero Dawn: there is a real, visceral sense of connection to these worlds as you move through them. In a similar way, when you’re playing with tools like Leonardo or Midjourney, there can often be visceral, embodied reactions to the images or media that they generate: Shane Denson has written about this in terms of “sublime awe” and “abject cringe”. Like video games, too, AI media generators allow us to observe worlds that we may never see in person. Indeed, some of the landscapes we generate may be completely alien or biologically impossible, at least on this planet, opening our eyes to different ecological possibilities or environmental arrangements. Visualising or imagining how ecosystems might develop is one way of potentially increasing awareness of those that are remote, unexplored or endangered; we may also be able to imagine how the real natural world might be impacted by our actions in the distant future. These alien visions might also, I suppose, prepare us for encountering different ecosystems and modes of life and biology on other worlds.

    It’s worth considering, though, how this re-visualisation, virtualisation, and re-constitution of environments, be they realistic or not, might change, evolve or hinder our collective mental image of, and our capacity to imagine, what constitutes ‘Nature’. This experience of generating ecosystems and environments may increase appreciation for our own very real, very tangible natural world and the impacts that we’re having on it, but like all imagined or technically-mediated processes, there is always a risk of disconnecting people from that same very real, very tangible world around them. They may well prefer the illusion; they may prefer some kind of perfection, some kind of banal veneer that they can have no real engagement with or impact on. And it’s easy to ignore the staggering environmental impacts of the technology companies pushing these tools when you’re engrossed in an ecosystem of apps and not of animals.

    In previous work, I proposed the concept of virtual environmental attunement, a kind of hyper-awareness of nature that might be enabled or accelerated by virtual worlds or digital experiences. I’m now tempted to revisit that theory in terms of asking how AI tools problematise that possibility. Can we use these tools to materialise or make perceptible something that is intangible, virtual, immaterial? What do we gain or lose when we conceive or imagine, rather than encounter and experience?

    Machine vision puts into sharp relief the limitations of humanity’s perception of the world. But for me there remains a certain romance and beauty and intrigue — a grotesque fascination, if you like — to living in the uncanny valley at the moment, and it’s somewhere that I do want to stay a little bit longer. This is despite the omnipresent feeling of ickiness and uncertainty when playing with these tools, while the licensing of the datasets that they’re trained on remains unclear. For now, though, I’m trying to figure out how connecting with the machine-mind might give some shape or sensation to a broader feeling of dis-connection.

    How my own ideas and my capacity to imagine might be extended or supplemented by these tools, changing the way I relate to myself and the world around me.

  • Conjuring to a brief

    Generated by me with Leonardo.Ai.

    This semester I’m running a Media studio called ‘Augmenting Creativity’. The basic goal is to develop best practices for working with generative AI tools not just in creative workflows, but as part of university assignments, academic research, and in everyday routines. My motivation or philosophy for this studio is that so much attention is being focused on the outputs of tools like Midjourney and Leonardo.Ai (as well as outputs from textbots like ChatGPT); what I guess I’m interested in is exploring more precisely where in workflows, jobs, and daily life these tools might actually be helpful.

    In class last week we held a Leonardo.Ai hackathon, inspired by one of the workshops that was run at the Re/Framing AI event I convened a month or so ago. Leonardo.Ai generously donated some credits for students to play around with the platform. Students were given a brief around what they should try to generate:

    • an AI Self-Portrait (using text only; no image guidance!)
    • three images to envision the studio as a whole (one conceptual, a poster, and a social media tile)
    • three square icons to represent one task in their daily workflow (home, work, or study-related)

    For the hackathon proper, students were only able to adjust the text prompt and the Preset Style; all other controls had to remain unchanged, including the Model (Phoenix), Generation Mode (Fast), and Prompt Enhance (off).
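    For anyone wanting to reproduce that locked-down setup outside the web interface, Leonardo.Ai also offers a REST API. The sketch below is indicative only: the generations endpoint is real, but the specific field names and the Phoenix model ID are assumptions from memory and should be checked against the current API documentation before use:

    ```python
    # Rough sketch of the hackathon's locked settings as a Leonardo.Ai API call.
    # Field names and the model ID below are assumptions; verify them against
    # the current Leonardo.Ai API docs before relying on this.
    import os
    import requests

    API_KEY = os.environ["LEONARDO_API_KEY"]  # hypothetical env var for your key

    payload = {
        "prompt": "an AI self-portrait, watercolour style",  # the one adjustable text field
        "presetStyle": "DYNAMIC",         # the other adjustable control (assumed field name)
        "modelId": "<phoenix-model-id>",  # fixed: Phoenix (placeholder ID)
        "alchemy": False,                 # fixed: Fast generation mode (assumption)
        "enhancePrompt": False,           # fixed: Prompt Enhance off (assumption)
        "num_images": 1,
    }

    resp = requests.post(
        "https://cloud.leonardo.ai/api/rest/v1/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())
    ```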

    Students were curious and excited, but also faced some challenges straight away with the underlying mechanics of image generators; they had to play around with word choice in prompts to get close to desired results. The biases and constraints of the Phoenix model quickly became apparent as the students tested its limitations. For some students this was more cosmetic, such as requesting that Leonardo.Ai generate a face with no jewelry or facial hair. This produced mixed results, in that sometimes explicitly negative prompts seemed to encourage the model to produce what wasn’t wanted. Other students encountered difficulties around race or gender presentation: the model struggles a lot with nuances in race, e.g. mixed-race or specific racial subsets, and also often depicts sexualised presentations of female-presenting people (male-presenting too, but much less frequently).
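    One plausible reason those ‘no jewelry’ prompts backfired: the text encoder registers the banned word whether or not it’s negated, so naming a thing at all tends to nudge the model towards it. Many pipelines expose a separate negative-prompt channel for exactly this reason (Leonardo.Ai offers one too, though its internals are closed). Here’s a hedged sketch of the contrast using the open-source diffusers library as an analogue, not Leonardo.Ai’s actual implementation:

    ```python
    # Contrast: asking for "no jewelry" inside the prompt vs. using a dedicated
    # negative prompt, via the open-source diffusers library.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Often backfires: the encoder still "sees" the word jewelry.
    risky = pipe("portrait of a person, no jewelry, no facial hair").images[0]

    # Usually more reliable: exclusions go in a separate channel that steers
    # the de-noising away from those concepts.
    better = pipe(
        "portrait of a person",
        negative_prompt="jewelry, facial hair",
    ).images[0]
    ```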

    This session last week proved a solid test of Leonardo.Ai’s utility and capacity in generating assets and content (we sent some general feedback to Leonardo.Ai on platform useability and potential for improvement), but also was useful for figuring out how and where the students might use the tool in their forthcoming creative projects.

    This week we’ve spent a little time on the status of AI imagery as art, some of the ethical considerations around generative AI, and where some of the supposed impacts of these tools may most keenly be felt. In class this morning, the students were challenged to deliver lightning talks on recent AI news, developing their presentation and media analysis skills. From here, we move a little more deeply into where creativity lies in the AI process, and how human/machine collaboration might produce innovative content. The best bit, as always, will be seeing where the students go with these ideas and concepts.
