The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: academia

  • Re-Wilding AI

    Here’s a recorded version of a workshop I first delivered at the Artificial Visionaries symposium at the University of Queensland in November 2024. I’ve used chunks/versions of it since in my teaching and parts of my research and practice.

  • New research published: The Allure of Artificial Worlds

    ‘Vapourwave Hall’, generated by me using Leonardo.Ai.

    This is a little late: the article was actually released back in November, but as I swore off work for a month over December and into the new year, I held off on posting here.

    This piece, ‘The Allure of Artificial Worlds’, is my first small contribution to AI research — specifically, I look here at how the visions conjured by image and video generators might be considered their own kinds of worlds. There is a nod here, as well, to ‘simulative AI’, also known as agentic AI, which many feel may be the successor to generative AI tools operating singularly. We’ll see.


    Abstract

    With generative AI (genAI) and its outputs, visual and aural cultures are grappling with new practices in storytelling, artistic expression, and meme-farming. Some artists and commentators sit firmly on the critical side of the discourse, citing valid concerns around utility, longevity, and ethics. But more spurious judgements abound, particularly when it comes to quality and artistic value.

    This article presents and explores AI-generated audiovisual media and AI-driven simulative systems as worlds: virtual technocultural composites, assemblages of material and meaning. In doing so, this piece seeks to consider how new genAI expressions and applications challenge traditional notions of narrative, immersion, and reality. What ‘worlds’ do these synthetic media hint at or create? And by what processes of visualisation, mediation, and aisthesis do they operate on the viewer? This piece proposes that these AI worlds offer a glimpse of a future aesthetic, where the lines between authentic and artificial are blurred, and the human and the machinic are irrevocably enmeshed across society and culture. Where the uncanny is not the exception, but the rule.

  • Grotesque fascination

    A few weeks back, some colleagues and I were invited to share some new thoughts and ideas on the theme of ‘ecomedia’, as a lovely and unconventional way to launch Simon R. Troon’s newest monograph, Cinematic Encounters with Disaster: Realisms for the Anthropocene. Here’s what I presented: a few scattered scribblings on environmental imaginaries as mediated through AI.


    Grotesque Fascination:

    Reflections from my weekender in the uncanny valley

    In February 2024, OpenAI announced their video generation tool Sora. In the technical paper that accompanied this announcement, they referred to Sora as a ‘world simulator’. Not just Sora, but also DALL-E or Runway or Midjourney, all of these AI tools further blur and problematise the lines between the real and the virtual. Image and video generation tools re-purpose, re-contextualise, and re-gurgitate how humans perceive their environments and those around them. These tools offer a carnival mirror’s reflection on what we privilege, prioritise, and what we prejudice against in our collective imaginations. In particular today, I want to talk a little bit about how generative AI tools might offer up new ways to relate to nature, and how they might also call into question the ways that we’ve visualised our environment to date.

    AI media generators work from datasets that comprise billions of images, as well as text captions, and sometimes video samples; the model maps all of this information using advanced mathematics into a hyper-dimensional space, often called the latent space. A random image of noise is then generated and fed through the model, along with a text prompt from the user. The model, typically a U-Net or similar architecture, uses the text to gradually de-noise the image in a way that it predicts is appropriate to the given prompt.
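    For the technically curious, that de-noising loop can be sketched in a few lines. This is a toy illustration only, not how Leonardo or Midjourney actually implement it: the ‘prompt embedding’ here is just a target array standing in for a text embedding, and the de-noiser (in real systems a trained neural network) is a simple function that nudges the noisy image toward that target.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_denoiser(noisy_image, prompt_embedding):
        """Stand-in for the trained model: predicts the 'noise' present
        in the image, conditioned on the prompt. Here that is just the
        deviation from a prompt-derived target image."""
        return noisy_image - prompt_embedding

    def generate(prompt_embedding, steps=50, shape=(8, 8)):
        # Start from pure random noise, as in the description above...
        image = rng.standard_normal(shape)
        # ...then gradually remove the predicted noise, step by step.
        for _ in range(steps):
            predicted_noise = toy_denoiser(image, prompt_embedding)
            image = image - (1.0 / steps) * predicted_noise
        return image

    # A 'prompt' here is just a flat grey target array, not real text.
    prompt = np.full((8, 8), 0.5)
    result = generate(prompt)
    ```

    Real generators do this in a compressed latent space, and the de-noiser is a network trained on those billions of captioned images; the loop structure, though, is much the same.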

    In these datasets, there are images of people, of animals, of built and natural environments, of objects and everyday items. These models can generate scenes of the natural world very convincingly. These generations remind me of the open virtual worlds in video games like Skyrim or Horizon Zero Dawn: there is a real, visceral sense of connection to these worlds as you move through them. In a similar way, when you’re playing with tools like Leonardo or Midjourney, there can often be visceral, embodied reactions to the images or media that they generate: Shane Denson has written about this in terms of “sublime awe” and “abject cringe”. Like video games, too, AI media generators allow us to observe worlds that we may never see in person. Indeed, some of the landscapes we generate may be completely alien or biologically impossible, at least on this planet, opening up our eyes to different ecological possibilities or environmental arrangements. Visualising or imagining how ecosystems might develop is one way of potentially increasing awareness of those that are remote, unexplored or endangered; we may also be able to imagine how the real natural world might be impacted by our actions in the distant future. These alien visions might also, I suppose, prepare us for encountering different ecosystems and modes of life and biology on other worlds.

    It’s worth considering, though, how this re-visualisation, virtualisation, and re-constitution of environments, be they realistic or not, might change, evolve or hinder our collective mental image of ‘Nature’, or our capacity to imagine what constitutes it. This experience of generating ecosystems and environments may increase appreciation for our own very real, very tangible natural world and the impacts that we’re having on it, but like all imagined or technically mediated processes there is always a risk of disconnecting people from that same very real, very tangible world around them. They may well prefer the illusion; they may prefer some kind of perfection, some kind of banal veneer that they can have no real engagement with or impact on. And it’s easy to ignore the staggering environmental impacts of the technology companies pushing these tools when you’re engrossed in an ecosystem of apps and not of animals.

    In previous work, I proposed the concept of virtual environmental attunement, a kind of hyper-awareness of nature that might be enabled or accelerated by virtual worlds or digital experiences. I’m now tempted to revisit that theory in terms of asking how AI tools problematise that possibility. Can we use these tools to materialise or make perceptible something that is intangible, virtual, immaterial? What do we gain or lose when we conceive or imagine, rather than encounter and experience?

    Machine vision puts into sharp relief the limitations of humanity’s perception of the world. But for me there remains a certain romance and beauty and intrigue — a grotesque fascination, if you like — to living in the uncanny valley at the moment, and it’s somewhere that I do want to stay a little bit longer. This is despite the omnipresent feeling of ickiness and uncertainty when playing with these tools, given that the licensing of the datasets they’re trained on remains unclear. For now, though, I’m trying to figure out how connecting with the machine-mind might give some shape or sensation to a broader feeling of dis-connection.

    How my own ideas and my capacity to imagine might be extended or supplemented by these tools, changing the way I relate to myself and the world around me.

  • Inertia

    Photo by Alexander Zvir, via Pexels.

    Since the interminable Melbourne lockdowns and their horrific effect on the population of the city, my place of work has implemented ‘slow-down’ periods. These are typically timed around the standard holiday periods, e.g. Christmas and Easter, but there’s usually also a slowdown scheduled around mid-semester and mid-year breaks. The idea isn’t exactly to stop work (in this economy? ahahahaha no, peasant.) but rather to skip or postpone any non-essential meetings and spend time on focused work. Most often for teacher-researchers like myself, this constitutes catching up on marking assignments or prepping for the coming weeks of classes, though sometimes we can scrape up some time to think about long-gestating research projects or creative work. That’s the theory, anyway.

    I will say it’s nice to pause meetings for a week or two. The nature of academic work is (and should be) collaborative, dependent on bouncing ideas off others, working together to solve gnarly pedagogical issues, pooling resources to compile rich and nuanced critical work. But if you’re balancing teaching or coordination along with administrative or managerial duties, plus postgraduate supervisions and research stuff, it can be a lot of being on, a lot of just… people work. I’ll throw in a quick disclaimer here that I’m very lucky to have a bunch of lovely colleagues, and the vast majority of my students have been almost saccharinely delightful to work with. It can still be a lot, though, if you’re pretty woeful at scheduling around your energy levels, as I often am. Hashtag high achiever, hashtag people pleaser, hashtag burnout, hashtag hashtag etc etc etc.

    Academics are notorious for keeping weird hours, or for working too much, or for not having any boundaries around work and life. And I say this as someone who has embodied that stereotype with aplomb for years (even pre-academia, to be honest). I’ve had many conversations with colleagues where we bemoan working late into the evening, or over the weekend, or around other commitments. I’ve often been hard-pressed to find anyone who has any hard boundaries around work and not-work.

    Taking extended leave last year was the first time I’ve ever properly stopped working. No sneaky finishing of research projects, no brainstorming the next media class, no cheeky research reading, no emails. It showed me many things, but primarily how insidious work can be for someone with my disposition and approach to life in general. It is also insidious when you are passionate, and when you care. I care deeply about media education and research, and have become familiar with its rhythms and contours, its stresses and its delights, its (many) foibles and much deeper issues. I care about students and ensuring they feel not just ‘delivered to’ or ‘spoken at’, but rather that they’re exposed to new ways of thinking; inspired to learn well beyond graduation, indeed, to never stop learning; enabled and empowered to tell their stories, and whatever stories they want to tell. I care about producing research, e.g. journal articles, video essays, presentations and events, that is not tired, stale, staid, boring, dense, conventional, but rather is experimental, vibrant, connected, open-ended, and appeals broadly across multiple disciplines and outside the academy.

    I’m not alone here. As mentioned above, I have colleagues who almost universally feel exactly the same way. And I’ve built a local and international research network who share these passions and questions and concerns. A global support group. I’m very lucky and privileged in this way.

    But yeah: all this shit is fucking exhausting. The environment, the sector, the period, certainly doesn’t help. The current model of academia, university management, tertiary education, the industry/academy nexus, capitalism (in summary: neoliberalism), all of it is quite happy to capitalise on passion, on modern productivity dicta around never-being-done, irons-in-the-fire, publish or perish, manage it all or die, no life for you, hang the consequences and anyone you’re dealing with who isn’t work (e.g. partners, kids, friends, families). To anyone who says academics have a cushy job and get paid too much: kindly take yourself into the sea, thanks. That may have been true in the past, but we’re living on the other side of whatever spectrum you’re looking at.

    Suffice to say, slowdowns are nice. Taking proper breaks and/or having an executive echelon that genuinely supports and structures wellbeing and balance would be ideal, but beggars can’t be choosers.

  • Shift Lock #3: A sales pitch for the tepid take

    After ‘abandoning’ the blog part of this site in early 2022, I embarked on a foolish newsletter endeavour called Shift Lock. It was fun and/or sustainable for a handful of posts, but then life got in the way. Over the next little while I’ll re-post those ruminations here for posterity. Errors and omissions my own. This instalment was published May 5, 2022 (see all Shift Lock posts here).


    Photo by Pixabay on Pexels.com

    Twitter was already a corporate entity, and had been struggling with how to market and position itself anyway. Not to mention, its free speech woes — irrevocably tied to those of its competitors — are not surprising. If anything, Mr. Musk was something of a golden ticket: someone to hand everything over to.

    The influx/exodus cycle started before the news was official… Muskovites joined/returned to Twitter in droves, opponents found scrolls bearing ancient Mastodon tutorials and set up their own mini-networks (let’s leave that irony steaming in the corner for now).

    None of this is new: businesses are bought and sold all the time, the right to free speech is never unconditional (and nor should it be), and the general populace move and shift and migrate betwixt different services, platforms, apps, and spaces all the time.

    What seems new, or at least different, about these latter media trends, issues, events, is the sheer volume of coverage they receive. What tends to happen with news from media industries (be they creative, social, or otherwise) is wall-to-wall coverage for a given week or two, before things peter out and we move on to the next block. It seems that online culture operates at two speeds: an instantaneous, rolling, roiling stream of chaos; and a broader, slightly slower rise and fall, where you can actually see trends come and go across a given time period. Taking the Oscars slap as an example: maybe that rise and fall lasts a week. Sometimes it might last two to four, as in the case of Musk and Twitter.

    How, then, do we consider or position these two speeds in broader ‘culture’?

    Like all of the aforementioned, Trump was not a new phenomenon. Populism was a tried and tested political strategy in 2015-16; just, admittedly, a strategy that many of us hoped had faded into obsolescence. However, true to the 20-30 year cycle of such things, Trump emerged. And while his wings were — mostly — clipped by the checks and balances of the over-complex American political system, the real legacy of his reign is our current post-truth moment. And that legacy is exemplified by a classic communications strategy: jamming. Jam the airwaves for a week, so everyone is talking about only one thing. Distract everyone from deeper issues that need work.

    This jamming doesn’t necessarily come from politicians, from strategists, from agencies, as it may once have done. Rather, it comes from a conversational consensus emerging from platforms — and this consensus is most likely algorithmically-driven. That’s the real concern. And as much as Musk may want to open up the doors and release the code, it’s really not that straightforward.

    The algorithms behind social media platforms are complex — more than that, they are nested, like a kind of digital Rube Goldberg machine. People working on one section of the code are often unaware of what other teams might be working on, beyond any do-not-disturb-type directives from on high. As scholar Nick Seaver says in a recent Washington Post piece, “The people inside Twitter want to understand how their algorithm works, too.” (Albergotti 2022)

    Algorithms — at least those employed by companies like Twitter — are built to stoke the fires of engagement. And there ain’t no gasoline like reactions, like outrage, like whatever the ‘big thing’ is for that particular week. These wildfires also intersect with the broader culture in ways that it takes longer-form criticism (I would say academic scholarship, but we often miss the mark, or more accurately, due to glacial peer review turnarounds, the boat) to meaningfully engage with and understand.

    Thanks partly to COVID but also to general mental health stuff, I’ve been on a weird journey with social media (and news, to be fair) over the past 3-5 years. Occasional sabbaticals have certainly helped, but increasingly I’m just not checking it. This year I’ve found more and more writers and commentators whose long-form work I appreciate as a way of keeping across things, but also just for slightly more measured takes. Tepid takes. Not like a spa but more like a heated pool. This is partly why I started this newsletter-based journey, just to let myself think things through in a way that didn’t need to be posted immediately, but nor did I need to wait months/years for peer review. Somewhere beyond even the second trend-based speed I mentioned above.

    What it really lets me do, though, is disengage from the constant flow of algorithmically-driven media, opinion, reaction, and so on, in a way where I can still do that thinking in a relevant and appropriate way. What I’m hoping is that this kind of distance lets me turn around and observe that flow in new and interesting ways.


    Below the divider

    At the end of each post I link a few sites, posts, articles, videos that have piqued my interest of late. Some are connected to my research, some to teaching and other parts of academia, still others are… significantly less so (let’s keep some fun going, shall we?).


    Reed Albergotti (2022, 16 April), ‘Elon Musk wants Twitter’s algorithm to be public. It’s not that simple.’ Washington Post.