Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.
A few years back I bought a Raspberry Pi 3B+, with the intention of using it as a safe little sandbox for learning to code. I thought maybe I would buy up some components and make little robots or something, maybe a web server or the like. Who knows, one day I may still do all of these things (and/or continue learning Python, which I abandoned at about the Functions mark).
The Pi was a fun little thing to boot up every now and again when my primary computer became too slow/overwhelmed through lockdowns, or when I became overwhelmed by to-dos, notifications, projects, etc., on my work machine. It really only has enough juice to run a web browser with one or two tabs, or LibreOffice Writer for basic word processing/drafting.
But I never really considered how the Pi might fit into my overall tech set-up, or whether it might actually be suitable as a regular machine at all.
I’ve always been intrigued by people returning to simpler modes of engaging with tech, particularly those in knowledge work where plenty of writing or focus time is required. Devices like the Freewrite, the AlphaSmart, the ReMarkable, all speak to a desire for writing with fewer bells and whistles, less distraction, more focus and control over your ‘machinespace’, if not your actual space or environment.
Cue late last year and early this year, when I started thinking more seriously about writing more regularly, particularly for this here blog. Cue also the aforementioned death of the Mac, and a desire to revive some old tech; maybe the Pi is just the right (write?) minimalist tool for the job. With an internet connection and basic desktop functions it’s not exactly a ‘dumb’ device, but I figured it might be a nice restricted environment in which to get some words pumped out.
Booting it up again, there was the old OS, Raspbian, a basic standard desktop wallpaper, and a Documents folder festooned with abandoned coding practice files. I figured starting from scratch might be a good idea. I won’t bore you with the details, but suffice to say that sorting out which version of the new Raspberry Pi OS would work best on an older model of Pi was… taxing. Between the Pi and the Macbook I do want to be able to use at least some of my main apps/tools, including Obsidian, but finding versions of these programs that are compatible with both older hardware and older systems is fairly painful.
Whenever I plug into ethernet, I feel like I’m going into lightspeed.
For now, I’m running 32-bit Raspberry Pi OS. There’s no Obsidian (that may have to remain on the work laptop/iOS devices depending on how the 2011 Macbook goes), but I’ve got a basic version of LibreOffice up and running for docs, presentations, spreadsheets. The process really inspired me to try and get back into Python, if only to build up a working knowledge of it over the rest of this year. While more complex projects may function better on one of the bigger machines, I can at least use the Pi as a dedicated coding tool for now. Depending on how it all goes, I may end up trying some of those robotics or server projects I was daydreaming about.
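If you ever find yourself similarly unsure about exactly which Pi you’re holding and which flavour of OS you’ve ended up with, a few lines of Python will tell you. This is just a rough sketch, and it assumes Linux on a Raspberry Pi: the /proc/device-tree/model path won’t exist elsewhere.

```python
# Rough sketch: report which Pi this is and whether the OS/Python are 32- or 64-bit.
# Assumes Linux on a Raspberry Pi; /proc/device-tree/model won't exist on other systems.
import platform
from pathlib import Path

model_file = Path("/proc/device-tree/model")
board = model_file.read_text().strip("\x00") if model_file.exists() else "unknown board"

print(f"Board:  {board}")                       # e.g. Raspberry Pi 3 Model B Plus Rev 1.3
print(f"Kernel: {platform.machine()}")          # e.g. armv7l (32-bit) or aarch64 (64-bit)
print(f"Python: {platform.architecture()[0]}")  # e.g. 32bit or 64bit
```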
“Get outta my dreams; Get into my car…”
I’m running this bad boy with the top down. Do Pi people say ‘with the top down’? I don’t really care, to be honest. I just mean I took the lid off because the poor little thing got quite hot, what with being wiped and reloaded 3-4 times over the course of a few hours. For shits and gigs, I also love hooking the Pi up to my enormous 4K monitor; it’s pretty remarkable that this tiny little box can drive such a huge display at a decent resolution.
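On the heat front: if you’d rather keep an eye on the temperature than keep pulling the lid off, Raspberry Pi OS ships with the vcgencmd utility, which a little Python can poll. A minimal sketch, assuming the usual temp=XX.X'C output format:

```python
# Minimal sketch: poll the Pi's SoC temperature via vcgencmd (Raspberry Pi OS).
# Assumes output in the usual form "temp=48.3'C"; adjust the parsing if yours differs.
import subprocess
import time

def soc_temp() -> float:
    out = subprocess.run(["vcgencmd", "measure_temp"],
                         capture_output=True, text=True, check=True).stdout.strip()
    return float(out.split("=")[1].rstrip("'C"))

if __name__ == "__main__":
    while True:
        print(f"SoC temperature: {soc_temp():.1f} C")
        time.sleep(10)  # check every ten seconds
```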
Once again, precisely how it fits into my workflows, processes, projects, let alone how it could remain semi-permanently in or on the physical workspace, remains to be seen. It was fun, though, to get it back to zero, to a place where I can answer some of those questions as I move forward.
Speaking of moving forward, the doorbell just rang; I think a solid-state drive just arrived. Which means the Pi is done for now… next up, the Macbook…
I had planned this week to post something very different, but in light of the way my week has panned out, I’m feeling all of the following much more keenly, particularly in the wake of some of my rants about social media and platforms and such like.
My primary computer, for nearly ten years, was a 2014 Mac with Retina display. It was a beautiful beast, and served me very well, particularly through lockdowns when there were some issues getting an external monitor for my work laptop. Come early 2023, though, it was showing signs of wear and tear. I didn’t really want to fork out for a new machine when I had a perfectly good laptop from work, so I let it go out to pasture (the Apple Store).
After some time off work last year, though, I wanted to put some effort into separating personal files from my work stuff. Up until this point, I had used a cloud service for everything, without a backup (shock horror). I do realise that somehow physically separating out machines and hard drives for work and non-work is fairly redundant in this age of clouds, but doing the actual labour of downloading from the cloud service, then separating out folders onto hard drives, machines, then backing everything up appropriately, was not a little therapeutic.
Having carved out the workspace on the newer Macbook, I was left wondering what to do for a personal machine. There is always, obviously, the desire to rush out and drop a great deal of money on the latest model, but for various reasons, this is not currently a possibility for me. Aside from that, I’m surrounded by old tech, left in cupboards, not yet eBayed or traded in or taken away for recycling. I am aware and conscious enough of the horrific impact of e-waste, and with my recent interests in a smaller, more intimate, cosy, sustainable internet, I thought that maybe this would be a chance to put my money where my mouth is. If you can’t be with the sleek new tech you love, honey, love the slightly chunkier, dustier tech you’re with.
Plus, I’ve always loved the idea of tinkering with tech, even if I’ve never actually done anything like this properly. My plans are neither unachievable nor overly ambitious; I have two computers, broadly defined, that I hope to revive and use in tandem as a kind of compound personal machine. The first of these is a Raspberry Pi 3B+, the second a mid-to-late 2011 Macbook Pro: the last Macbook I owned that wasn’t a work device.
Come along with me, won’t you, on this journey of learning and self-discovery? Coming tomorrow (or at least in the coming days)… Part 2: Mmm, Pi.
If you like what’s going on here at The Clockwork Penguin, if you appreciate the cut of my particular jib, as it were, buy me a coffee!
Godzilla with the spicy lightning as depicted in Godzilla vs. Kong (2021).
The 2014 reboot/continuation/expansion of the Godzilla franchise opens with the standard mystery box. A helicopter flies low over a jungle landscape, there are low minor chords from a rumbling orchestra: dissonance, uncertainty, menace. Helicopter Passenger #1 turns to Helicopter Passenger #2: “They found something.”
This is the germ of what is now a multi-film franchise, with a spin-off TV series that debuted in late 2023. A few weeks ago, I re-watched Gareth Edwards’ 2014 reboot, as well as the sequel films I hadn’t seen, Godzilla II: King of the Monsters and Godzilla vs. Kong.
It was a bit of fun, obviously, a last hurrah before I went back to the very serious business of media academicking, but as is wont to happen, it’s been stewing ever since. So here: have some little thoughts on big monsters.
Last week I Masta’d1 up some speculations as to why Argylle has flopped. The first and most obvious reason a film might tank is that it’s just not a very good film, and this may well be true of Argylle. But in a time when cinema is dead and buried and a media object is never discrete, we can’t look at the film in a vacuum.
I have thoughts on why #Argylle flopped. I haven’t seen it, so I won’t go into any great depth, but suffice to say there are two major components:
1) Marvel killed transmedia storytelling, jumped around on the corpse, drove a steamroller over it, then buried it in a nuclear waste facility.
2) Camp doesn’t hit like it used to. Big ensemble campy treats aren’t as sweet now; in an age of hyper-sensitivity, broad knowledge and information access, they taste a little sour. Ain’t no subtext anymore.2
The marketing machine behind Argylle decided they’d play a little game, teasing a novel written by a mystery author (a mystery both in the sense that they weren’t well known, and in the sense that they wrote actual mysteries), with the film quickly picked up for production. This was fairly clumsily done, but leaving that to one side: okay, cool idea. The conceit is that the author runs into the real-life equivalent of one of their characters, who whisks them away on an adventure. Cue ideas of unreliable narration, possible brainwashing, or whatever, and there’s the neat little package.
The concept overall is solid, but Universal and Apple made the mistake of thinking they could shoehorn this concept into a campaign that ‘tricked’ the audience into thinking some of it was factual, or at least had some tenuous crossover with reality.
Basically, they tried an old-school transmedia campaign.
Transmedia storytelling has always been around in some form or another. It dovetails quite nicely with epistolary and experimental narratives, like Mary Shelley’s original Frankenstein, and discussions of transmedia also work well when you’re thinking about serial stories or adaptations. The term is most often attributed to Henry Jenkins, a wonderful and kindly elder scholar who was thinking about the huge convergence of media technologies occasioned by the wide adoption of the internet in the late 1990s and early 2000s.
Jenkins’ ur-example of transmedia is The Matrix franchise, “a narrative so large that it cannot be contained within a single medium.”3 The idea is that in order to truly appreciate the narrative as a whole, the audience has to follow up on all the elements, be they films, video games, comic books, or whatever.
“Each franchise entry needs to be self-contained so you don’t need to have seen the film to enjoy the game, and vice versa. Any given product is a point of entry into the franchise as a whole. Reading across the media sustains a depth of experience that motivates more consumption.”4
This model is now far from unique in terms of marketing or storytelling; with the MCU, the DC Universe, the Disneyfied Star Wars universe and others, we have no dearth of complex narratives and storyworlds to choose from. This is perhaps partly why transmedia is now seen as, at best, a little dated and old hat, and at worst, a bit of a dirty word in narrative, media, or cinema studies. Those still chipping away at the transmedia stoneface are seen as living in the past or, worse, wasting their time. I don’t think it’s a waste of time, nor do I necessarily see it as living in the past; it’s just that transmedia is media now.
Every new media commodity, be it a film, an app, a game, a platform, or a novel, has a web of attendant media commodities spring up around it. Mostly these are used for marketing, but occasionally these extraneous texts relay some plot point or narrative element. The issue is that you need the conceit to be front and centre, and you need to give some sense of the full narrative; you can’t expect the audience to want to do the work. The producers of Argylle made this mistake. They did transmedia the old-fashioned way, where narrative elements are spread across discrete media objects (e.g. book and film), and they expected the audience to want to fill in the gaps, and to share their excitement at having done so… on social media, I guess?
But like transmedia storytelling, social media ain’t what she used to be. Our present internet is fragmented, hyper-platformed, paywalled; city-states competing for dominance, for annexation (acquisition), for citizens or slaves (subscribers). Content is still king, but the patrician’s coffers care not as to whether that content is produced by the finest scribes of the age, or the merchant guild’s new automatons.
Viral is still viral, popular is still popular, but the propagation of content moves differently. Hashtags, likes, views don’t mean much anymore. You want people talking, but you only care as much as it gets new people to join your platform or your service. Get new citizens inside the gates, then lock and bar the gates behind them; go back to the drawing board for the next big campaign. The long tail is no more; what matters now are flash-in-the-pan attention attacks.
The producers behind the Godzilla reboot clearly envisioned a franchise. This is clear enough from how the film ends (or more accurately, where it stops talking, and the credits are permitted to roll). Godzilla apparently saves the world (or at least Honolulu and Las Vegas) from another giant monster, then disappears into the sea, leaving humanity to speculate as to its motivations.
It’s also apparent that the filmmakers didn’t want a clean break from the cultural history, themes or value of the broader Godzilla oeuvre; the title sequence suggests that the 1954 Castle Bravo tests were actually an operation to destroy Godzilla5. And in the film’s prologue, this wonderful shot is presented with virtually no context.
American triumphalism meets Japanese… er, monster promotions?
What struck me most, though, is the lack of overt story-bridges, particularly in the first film. Story-bridges are parts of the plot, e.g. characters, events, images, that allow the audience to jump off to another part of the narrative. These jumping-off points can be explicit, e.g. an end-credits sequence, or a line of dialogue referring to a past/future event, or they can be implied, e.g. the introduction of a character in a minor role that may participate more prominently in other media.
As media franchises become more complex, these points/bridges are less often modelled as branches connecting nodes around a central point (a tentpole film, for instance), and more as a mesh connecting multiple, interconnected tentpole media. In some of my academic work with my colleagues Vashanth Selvadurai and Peter Vistisen, we’ve explored how Marvel attempts to balance this complexity:
“[Marvel] carefully balances production by generating self-contained stories for the mass audience, which include moments of interconnectivity in some scenes to fortify the MCU and thereby accommodate the fan base… [T]he gradual scaling up in bridge complexity from characters to storyworld to a cohesive storyline is integrated into a polycentric universe of individual transmedia products. These elements are not gathered around one tentpole event that the audience has to experience before being able to make sense of the rest.”4
In Godzilla, the story-bridges are more thematic, even tonal. The story remains consistently about humanity’s desire to control the natural world, and that world’s steadfast resistance to control; multiple times we hear the scientist Ishiro Serizawa (Ken Watanabe) speak about ‘balance’ in nature, the idea of a natural equilibrium that humanity has upset, and that Godzilla and his kin will restore. There is also a balance within humanity itself: corporate greed, political power struggles, individual freedoms and restrictions all vie to find a similar kind of equilibrium, if such a thing is possible. The resulting tone feels at once universal and prescient: topical and relevant to the contemporary moment, despite the presence of enormous monsters.
This tone is carried over into Godzilla: King of the Monsters. The monsters multiply in this instalment, and an extraterrestrial element is introduced, but in general Godzilla’s animal motivation remains preservation: of self, but also of Earth and its biology. I should also note that there are more explicit bridges in this film, like the characters of Dr. Serizawa and his colleague Dr. Vivienne Graham (Sally Hawkins). But the true connecting thread, at least for me, is this repeated theme of humans and our puny power struggles playing out against the backdrop of deep time: of a history, of forces and powers so ancient that we can never really understand them.
This macro-bridge, if you like, allows the filmmakers to then make tweaks to the micro-elements of the story. If they want or need to adjust the character focus, they can. If the plot of a single film seems a little rote, maybe, or they want to try something different, they’ve given themselves enough space in the story and the story-world to do that. This may not necessarily be intentional, but it certainly appears as an effective counter-model to the MCU/Disney mode, where everything seems over-complicated and planned out in multi-year phases, and everything is so locked in. The MonsterVerse approach is one of ad hoc franchise storytelling, and the result is a universe that feels more free, more open: full of possibilities and untold stories.
The point of all of this, I suppose, is to let us see what works and what doesn’t. As a storyteller or creative type, it helps me to model and test approaches to storytelling of all scales and budgets, as I think about what kinds of narratives I want to develop, and in which media form. Beyond that, though, I think that as we move into a contentscape that muddles the human-made with the computer-generated, this kind of analysis and discussion is more essential than ever.
Notes & References
Still working out the vernacular for the new social web. ↩︎
The first Godzilla film, directed by Ishirō Honda, was also released in 1954. ↩︎
Selvadurai, V., Vistisen, P., & Binns, D. (2022). Bridge Complexity as a Factor in Audience Interaction in Transmedia Storytelling. Journal of Asia-Pacific Pop Culture, 7(1), 85–108 (quote from pp. 96–97). https://doi.org/10.5325/jasiapacipopcult.7.1.0085 ↩︎
Remember the good old days of social media, when we’d all sit around laughing at a Good Tweet™? Me either. Actually, that was never a thing. Photo by Anna Shvets on Pexels.
Originally I was going to post some condensed form of this to socials, but I thought some may be interested in an extended ramble and/or the workflows involved.
I deleted my Twitter last year in a mild fit of ethical superiority. I’d been on the platform some 14 years at that point. At first, I delighted in the novelty of microblogging: short little bursts of thought that people could read through, respond to, re-post themselves. But then, as is now de rigueur for all platforms, things changed. Even before Elon took over, the app started tweaking little bits and pieces, changing the way information was presented, prioritised, and delivered. Come the mid-2010s, it just wasn’t the same any more; by that stage, though, so many people that I knew and/or needed to know of were using the app. It became something I checked weekly, like all my other social network pages, some blogs, etc. One more feed.
Elon’s takeover, though, seemed like a fitting exit point. Many others felt the same way. I kind of rushed the breakaway, though; I did download all my data, thank the maker, but in terms of flagging the move with people who followed me for various reasons (personal, professional, tracking related declines, etc), I just… didn’t. I set up a Mastodon on the PKM instance, because that was a nice community that I’d found myself in as a positive byproduct of a rather all-encompassing obsession with productivity, life organisation, and information retention/recycling. I’m still on the ‘don (or Masta, per your preference), though I’ve shifted to the main mastodon.social instance to make automation and re-posting easier.
Anyway, to cut to the chase, I rebooted the ol’ Twitter/X/Elon.com account in the last couple of months, just to keep track of people who’ve not yet shifted elsewhere.1 What I didn’t manage to do before I shut it down last year, though, was to export or keep a record of those 700-odd people I was following, nor did I transfer them over to Mastodon, which tools like Movetodon let you do pretty seamlessly.
Thankfully, buried in the data export was a JavaScript file called “following.js”, which contained IDs and URLs for all the Twitter accounts I’d originally followed. Bear in mind, though, that these weren’t the Twitter usernames, e.g. @NY152 or @Shopgirl, but rather the ID numbers that Twitter creates as a stable reference for each user. The user IDs and URLs were also surrounded by all the JavaScript guff2 used to display the info in a readable form.
I have a rudimentary grasp of very basic Python, but JavaScript remains beyond me, so I used the wonderful TextBuddy to remove everything but the URLs, then saved the result as a text file. String manipulation is a wonderful process, but unfortunately the checking of each account remains up to me.
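For what it’s worth, here’s roughly what that TextBuddy step looks like as a few lines of Python. A sketch only: I’m going from memory on the shape of following.js (each entry pairs an accountId with a userLink), so check your own export before trusting the regex.

```python
# Rough Python equivalent of the TextBuddy step: strip following.js down to just the URLs.
# From memory, each entry in the export looks something like:
#   { "following" : { "accountId" : "12345678",
#                     "userLink"  : "https://twitter.com/intent/user?user_id=12345678" } }
# so we just fish out anything matching that URL pattern.
import re
from pathlib import Path

raw = Path("following.js").read_text(encoding="utf-8")
urls = re.findall(r"https://twitter\.com/intent/user\?user_id=\d+", raw)

Path("following_urls.txt").write_text("\n".join(urls) + "\n", encoding="utf-8")
print(f"Extracted {len(urls)} account URLs")
```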
So whenever I have a spare hour, I’ve been sitting down at the computer and copying and pasting a bunch of URLs into the “Open Multiple URLs” Chrome extension. It’s tedious work, obviously. But it’s been really interesting to see (a) who is inactive on Twitter, and for how long they’ve been so; (b) who’s switched to private, whether since Elon or before; (c) who’s moved to Masta or elsewhere; and (d) who’s still active, and how. It’s also just a great chance to filter out all the rubbish accounts I followed over those fourteen years!
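If the copy-paste routine ever gets too tedious, Python’s standard webbrowser module can handle the batching instead. A minimal sketch, assuming the one-URL-per-line text file from before:

```python
# Sketch: open the saved account URLs ten at a time in the default browser,
# waiting for a keypress between batches so neither the browser nor I get swamped.
import webbrowser
from pathlib import Path

BATCH_SIZE = 10
urls = Path("following_urls.txt").read_text(encoding="utf-8").split()

for start in range(0, len(urls), BATCH_SIZE):
    for url in urls[start:start + BATCH_SIZE]:
        webbrowser.open_new_tab(url)
    done = min(start + BATCH_SIZE, len(urls))
    input(f"Opened {done} of {len(urls)}; press Enter for the next batch... ")
```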
In general terms, anyone with any level of tech knowledge or a broad online following has shifted almost entirely to different services, maybe leaving up a link or a pinned post to catch any stray visitors. Probably around 40-50% of them are still active in some way, be that sharing work or thoughts with an established audience, or staying in touch with communities.3 Several of the URLs have hit 404s, which means that user has just deleted their X account entirely; good for you, even though I have no idea who you are/were!
As I develop my thoughts around platforms, algorithms, culture, and so on, reflecting on my own platform use, tech setup, and engagements with data is becoming more than just a hobby; it’s forming a core part of the process. I’ve always struggled to reconcile my creative work and my personal interests/hobbies with my academic interests. But I think that from now on I just have to accept that there will always be overlap, particularly if I’m to do anything with these ideas, be it write a screenplay or a book, a bunch of blog posts, or anything academical.4
Notes
I also really like that I locked down the @binnsy username before anyone else got to it; there are plenty of Binnses even just in my family who use that nickname! ↩︎
This is obviously prevalent in my field of academia, where so many supportive communities have been established over long periods of time, e.g. #PhDchat etc etc. I realised after I deleted my account that even though I don’t participate anywhere near like I used to, these are such valuable spaces when I do log on, and obviously for countless others. You don’t and can’t just throw that shit away. ↩︎
So much of what I’m being fed at the moment concerns the recent wave of AI. While we are seeing something of a plateauing of the hype cycle, I think (/hope), it’s still very present as an issue, a question, an opportunity, a hope, a fear, a concept. I’ll resist my usual impulse to historicise this last year or two of innovation within the contexts of AI research, which for decades was popularly mocked and institutionally underfunded; I’ll also resist the even stronger impulse to look at AI within the even broader milieu of technology, history, media, and society, which is, apparently, my actual day job.
What I’ll do instead is drop the phrase algorithmic moment, which is what I’ve been trying to explore, define, and work through over the last 18 months. I’m heading back to work next week after an extended period of leave, so this seems as good a way as any of getting my head back into some of the research I left to one side for a while.
The algorithmic moment is what we’re in right now. It’s the current AI bubble, hype cycle, growth spurt, however you want to define this wave (some have dubbed it the AI spring or boom, to distinguish it from the various AI winters of the last century1). In trying to bracket it off with concrete times, I’ve settled more or less on the emergence of the GPT-3 beta in 2020. Of course OpenAI and other AI innovations predated this, but it was GPT-3 and its children ChatGPT and DALL-E 2 that really propelled discussions of AI and its possibilities and challenges into the mainstream.
This also means that much of this moment is swept up with the COVID pandemic. While online life had bled into the real world in interesting ways pre-2020, it was really that year, during urban lockdowns, family Zooms, working from home, and a deeply felt global trauma, that online and off felt one and the same. AI innovators capitalised on the moment, seizing capital (financial and cultural) in order to promise a remote revolution built on AI and its now-shunned siblings in discourse, web3 and NFTs.
How AI plugs into the web as a system is a further consideration — prior to this current boom, AI datasets in research were often closed. But OpenAI and its contemporaries used the internet itself as their dataset. All of humanity’s knowledge, writing, ideas, artistic output, fears, hopes, dreams, scraped and plugged into an algorithm, to then be analysed, searched, filtered, reworked at will by anyone.
The downfall of FTX and the trial of Sam Bankman-Fried more or less sounded the death knell of NFTs as the Next Big Thing, if not of web3 as a broader notion to be deployed across open-source, federated applications. And as NFTs slowly left the tech conversation, as that hype cycle wound down, the AI boom filled the void, such that one can hardly log on to a tech news site or half of the most popular Substacks without seeing a diatribe or puff piece (not unlike this very blog post) about the latest development.
ChatGPT has become a hit productivity tool, as well as a boon to students, authors, copywriters and content creators the world over. AI is a headache for many teachers and academics, many of whom fail not only to grasp its actual power and operations, but also to usefully and constructively implement the technology in class activities and assessment. DALL-E, Midjourney and the like remain controversial phenomena in art and creative communities, where some hail them as invaluable aids, and others debate their ethics and value.
As with all previous revolutions, the dust will settle on the AI revolution too. The research and innovation will continue as they always have, but out of the limelight and away from the headlines. It currently feels like we cannot keep up, that it’s all happening too fast, that if only we slowed down and thought about things, we could try and understand how we’ll be impacted, how everything might change. At the risk of historicising, exactly like I said I wouldn’t, people thought the same of the printing press, the aeroplane, and the computer. In 2002, Andrew Murphie and John Potts were trying to capture the flux and flow and tension and release of culture and technology. They were grappling in particular with the widespread adoption of the internet, and how to bring that into line with other systems and theories of community and communication. Jean-François Lyotard had said that new communications networks functioned largely on “language games” between machines and humans. Building on this idea, Murphie and Potts suggested that the information economy “needs us to make unexpected ‘moves’ in these games or it will wind down through a kind of natural attrition. [The information economy] feeds on new patterns and in the process sets up a kind of freedom of movement within it in order to gain access to the new.”2
The information economy has given way, now, to the platform economy. It might be easy, then, to think that the internet is dead and decaying or, at least, kind of withering or atrophying. Similarly, it can be even easier to think that in this locked-down, walled-off, platform- and app-based existence where online and offline are more or less congruent, we are without control. I’ve been dropping breadcrumbs over these last few posts as to how we might resist in some small way, if not to the detriment of the system, then at least to the benefit of our own mental states; and I hope to keep doing this in future posts (and over on Mastodon).
For me, the above thoughts have been gestating for a long time, but they remain immature, unpolished, unfiltered, which, in its own way, is a form of resistance to the popular image of the opaque black box of algorithmic systems. I am still trying to figure out what to do with them: whether to develop them further into a series of academic articles or a monograph, to just keep posting random bits and bobs here on this site, or to seed them into a creative piece, be it a film, book, or something else entirely. Maybe a little of everything, but I’m in no rush.
As a postscript, I’m also publishing this here to resist another system, that of academic publishing, which is monolithic, glacial, frustrating, and usually hidden behind a paywall. Anyway, I’m not expecting anyone to read this, much less use or cite it in their work, but better it be here if someone needs it than reserved for a privileged few.
As a bookend for the AI-generated image that opened the post, I asked Bard for “a cool sign-off for my blog posts about technology, history, and culture” and it offered the following, so here you go…
Signing off before the robots take over. (Just kidding… maybe.)
Notes
For an excellent history of AI up to around 1990, I can’t recommend enough AI: The Tumultuous History of the Search for Artificial Intelligence by Daniel Crevier. Crevier has made the book available for download via ResearchGate. ↩︎