Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.
Image generated by Leonardo.Ai, 12 December 2025; prompt by me.
A few months ago I connected with Joaquin Melara, a US-based tech community builder. Joaquin has been very busy developing SWARM, a collective of fascinating folx from all walks of life, working towards the responsible adoption of AI technology. As well as some great events and seminars, SWARM also produces The AI Digest Podcast, and I was thrilled to be invited to join Joaquin to talk about my glitchy AI work.
Civility, care, and the ethics of critique in academia
Here are some (lightly edited, anonymised) highlights from recent peer review reports I received on submissions to Q1 journals.
“a rather basic, limited and under-referenced overview”

“I do not see how it contributes any original scholarship to the field”

“The claim that [XYZ] is nonsense.”
… and these weren’t even from Reviewer 2!
Perhaps more distressingly, the following quote from an editor:
“The paper might be interesting but is not well prepared, and not technically accurate or insightful, as revealed in biting commentary from the best of two reviews”
The editor tries to be encouraging while also defending the same “biting commentary”:
“Authors may take advantage of these excellent and insightful review comments, and possibly compose a new paper for a possible future submission”
You may be thinking “Suck it up, snowflake.”
Sorry but no.
I’ve had harsh reviews before. I’ve written harsh reviews before. But you never call someone’s work ‘nonsense’. You never call someone’s work ‘unoriginal’ or ‘basic’, even if you think it. And you certainly never do so without offering suggestions for how the author might address these critiques, which is exactly what these reviewers neglected to do.
I might take about half an hour to write a blog post; maybe up to a day or so if it’s a bit longer and needs referencing, editing or proofing. I don’t really care if people don’t read or don’t like this work. It’s mainly for myself. The articles that received these comments, however, took between four and twelve months to write: for that kind of labour, you expect some level of engagement, and at least basic human courtesy in how responses are framed.
Reviewers: don’t be a dick.
Editors: shield contributors from harsh reviews.
Academia is intimidating and gatekept enough without this actual nonsense.
A few weeks ago I was invited to present some of my work at Caméra-Stylo, a fantastic conference run every two years by the Sydney Literature and Cinema Network.
For this presentation, I wanted to start to formalise the experimental approach I’d been employing around generative AI, and to give it some theoretical grounding. I wasn’t entirely surprised to find that only by looking back at my old notes on early film theory would I unearth the perfect words, terms, and ideas to, ahem, frame my work.
Here’s a recording of the talk:
Let me know what you think, and do contact me if you want to chat more or use some of this work yourself.
Here’s a recorded version of a workshop I first delivered at the Artificial Visionaries symposium at the University of Queensland in November 2024. I’ve used chunks/versions of it since in my teaching and parts of my research and practice.
‘Vapourwave Hall’, generated by me using Leonardo.Ai.
This is a little late, as the article was actually released back in November, but since I swore off work for a month over December and into the new year, I held off on posting here until now.
This piece, ‘The Allure of Artificial Worlds’, is my first small contribution to AI research. Specifically, I look here at how the visions conjured by image and video generators might be considered their own kinds of worlds. There is a nod here, as well, to ‘simulative AI’, also known as agentic AI, which many feel may be the successor to generative AI tools operating singly. We’ll see.
Abstract
With generative AI (genAI) and its outputs, visual and aural cultures are grappling with new practices in storytelling, artistic expression, and meme-farming. Some artists and commentators sit firmly on the critical side of the discourse, citing valid concerns around utility, longevity, and ethics. But more spurious judgements abound, particularly when it comes to quality and artistic value.
This article presents and explores AI-generated audiovisual media and AI-driven simulative systems as worlds: virtual technocultural composites, assemblages of material and meaning. In doing so, this piece seeks to consider how new genAI expressions and applications challenge traditional notions of narrative, immersion, and reality. What ‘worlds’ do these synthetic media hint at or create? And by what processes of visualisation, mediation, and aisthesis do they operate on the viewer? This piece proposes that these AI worlds offer a glimpse of a future aesthetic, where the lines between authentic and artificial are blurred, and the human and the machinic are irrevocably enmeshed across society and culture. Where the uncanny is not the exception, but the rule.