The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: algorithms

  • This algorithmic moment

    Generated by Leonardo AI; prompts by me.

    So much of what I’m being fed at the moment concerns the recent wave of AI. While we are seeing something of a plateauing of the hype cycle, I think (/hope), it’s still very present as an issue, a question, an opportunity, a hope, a fear, a concept. I’ll resist my usual impulse to historicise this last year or two of innovation within the context of AI research, which for decades was popularly mocked and institutionally underfunded; I’ll also resist the stronger impulse to situate AI within the broader milieu of technology, history, media, and society, which is, apparently, my actual day job.

    What I’ll do instead is drop the phrase algorithmic moment, which is what I’ve been trying to explore, define, and work through over the last 18 months. I’m heading back to work next week after an extended period of leave, so this seems as good a way as any of getting my head back into some of the research I left to one side for a while.

    The algorithmic moment is what we’re in right now. It’s the current AI bubble, hype cycle, growth spurt, however you define this wave (some have dubbed it the AI spring or boom, to distinguish it from the various AI winters of the last century¹). In trying to bracket it off with concrete times, I’ve settled more or less on the emergence of the GPT-3 beta in 2020. Of course OpenAI and other AI innovators predated this, but it was GPT-3 and its children, ChatGPT and DALL-E 2, that really propelled discussions of AI and its possibilities and challenges into the mainstream.

    This also means that much of this moment is swept up with the COVID pandemic. While online life had bled into the real world in interesting ways pre-2020, it was really that year, during urban lockdowns, family Zooms, working from home, and a deeply felt global trauma, that online and off felt one and the same. AI innovators capitalised on the moment, seizing capital (financial and cultural) in order to promise a remote revolution built on AI and its now-shunned discursive siblings, web3 and NFTs.

    How AI plugs into the web as a system is a further consideration — prior to this current boom, AI datasets in research were often closed. But OpenAI and its contemporaries used the internet itself as their dataset. All of humanity’s knowledge, writing, ideas, artistic output, fears, hopes, dreams, scraped and plugged into an algorithm, to then be analysed, searched, filtered, reworked at will by anyone.

    The downfall of FTX and the trial of Sam Bankman-Fried more or less sounded the death knell for NFTs as the Next Big Thing, if not for web3 as a broader notion to be deployed across open-source, federated applications. And as NFTs slowly left the tech conversation, as that hype cycle started falling, the AI boom filled the void, such that one can hardly log on to a tech news site, or half of the most popular Substacks, without seeing a diatribe or puff piece (not unlike this very blog post) about the latest development.

    ChatGPT has become a hit productivity tool, as well as a boon to students, authors, copywriters and content creators the world over. AI is a headache for many teachers and academics, many of whom struggle not only to grasp its actual power and operations, but also to implement the technology usefully and constructively in class activities and assessment. DALL-E, Midjourney and the like remain controversial phenomena in art and creative communities, where some hail them as invaluable aids and others debate their ethics and value.

    As with all previous revolutions, the dust will settle on that of AI. The research and innovation will continue as they always have, but out of the limelight and away from the headlines. It currently feels like we cannot keep up, that it’s all happening too fast, that if only we slowed down and thought about things, we could try and understand how we’ll be impacted, how everything might change. At the risk of historicising, exactly like I said I wouldn’t: people thought the same of the printing press, the aeroplane, and the computer. In 2003, Andrew Murphie and John Potts were trying to capture the flux and flow and tension and release of culture and technology. They were grappling in particular with the widespread adoption of the internet, and how to bring that into line with other systems and theories of community and communication. Jean-François Lyotard had said that new communications networks functioned largely on “language games” between machines and humans. Building on this idea, Murphie and Potts suggested that the information economy “needs us to make unexpected ‘moves’ in these games or it will wind down through a kind of natural attrition. [The information economy] feeds on new patterns and in the process sets up a kind of freedom of movement within it in order to gain access to the new.”²

    The information economy has given way, now, to the platform economy. It might be easy, then, to think that the internet is dead and decaying, or at least withering and atrophying. It can be even easier to think that, in this locked-down, walled-off, platform- and app-based existence where online and offline are more or less congruent, we are without control. I’ve been dropping breadcrumbs over these last few posts as to how we might resist in some small way, if not to the detriment of the system, then at least to the benefit of our own mental states; and I hope to keep doing this in future posts (and over on Mastodon).

    For me, the above thoughts have been gestating for a long time, but they remain immature, unpolished, unfiltered, which, in its own way, is a form of resistance to the popular image of the opaque black box of algorithmic systems. I am still trying to figure out what to do with them: whether to develop them further into a series of academic articles or a monograph, to just keep posting random bits and bobs here on this site, or to seed them into a creative piece, be it a film, book, or something else entirely. Maybe a little of everything, but I’m in no rush.

    As a postscript, I’m also publishing this here to resist another system: academic publishing, which is monolithic, glacial, frustrating, and usually hidden behind a paywall. Anyway, I’m not expecting anyone to read this, much less use or cite it in their work, but better it be here if someone needs it than reserved for a privileged few.

    As a bookend for the AI-generated image that opened the post, I asked Bard for “a cool sign-off for my blog posts about technology, history, and culture” and it offered the following, so here you go…

    Signing off before the robots take over. (Just kidding… maybe.)


    Notes

    1. For an excellent history of AI up to around 1990, I can’t recommend enough AI: The Tumultuous History of the Search for Artificial Intelligence by Daniel Crevier. Crevier has made the book available for download via ResearchGate.
    2. Murphie, Andrew, and John Potts. 2003. Culture and Technology. London: Macmillan Education UK, p. 208. https://doi.org/10.1007/978-1-137-08938-0.
  • Critics and creation

    Photo by Leah Newhouse on Pexels.

    I started reading this interview this morning, between Anne Helen Petersen and Betsy Gaines Quammen. I still haven’t finished it, despite being utterly fascinated, but even before I got to the guts of the interview, I was struck by a thought:

    In the algorithmised world, the creator is the critic.

    This thought isn’t happening in isolation; I’ve been thinking about ‘algorithmic culture’ for a couple of years, trying to order those thoughts into academic writing, or even creative writing. But this one feels like a step in the right direction, even if I’ve no idea what the final output should or will be. Let’s scribble out some notes…

    If there’s someone whose work we enjoy, they’ll probably have an online presence — a blog or social media feed we can follow — where they’ll share what they like.

    It’s an organic kind of culture — but it’s one where the art and vocation of the critic continue to be minimised.

    This, and associated phenomena, is the subject of a whole bunch of recent and upcoming books (including this one, which is at the top of my to-read pile for the next month): a kind of culture where the all-powerful algorithm becomes the sole arbiter of taste. But I also think there is pressure on creatives to be their own kind of critical and cultural hub.

    On the flip side, what we may traditionally have called critics — modern-day social media commentators, influencers, your BookTubers and BookTokkers, your video essayists and their ilk — now also feel pressure to create. This pressure will come from their followers and acolytes, but also from random people who encounter them online, who will say something like “if you know so much, why don’t you just do it yourself?”

    Some critics will leap at the opportunity, and they absolutely should — we are hearing from diverse voices that wouldn’t otherwise have thought to try.

    But some should leave the creation to others — not because they’re not worth hearing from, they absolutely are — but because their value, their creativity, their strength, lies in how they shape language, images, metaphor, around the work of others. They don’t realise — as I didn’t for a long time — that being a critic is a vocation, a life’s work, a real skill. Look at any longer-form piece in the London Review of Books or The New Inquiry and it becomes very clear how valuable this work is.

    I’ve always loved the term critic, particularly cultural critic, or commentator, or essayist… they always seemed like wonderful archaic terms that don’t belong in the modern, fragmented, divided, confused world. But to call oneself a critic or essayist, to own that, and only that, is to defy the norms of culture; to refuse the ‘pillars’ of novel, film, press/journalism, and to stand to one side, giving much-needed perspective to how these archaic forms define, reflect, and challenge society.

  • Shift Lock #3: A sales pitch for the tepid take

    After ‘abandoning’ the blog part of this site in early 2022, I embarked on a foolish newsletter endeavour called Shift Lock. It was fun and/or sustainable for a handful of posts, but then life got in the way. Over the next little while I’ll re-post those ruminations here for posterity. Errors and omissions my own. This instalment was published May 5, 2022 (see all Shift Lock posts here).


    Photo by Pixabay on Pexels.com

    Twitter was already a corporate entity, and had been struggling with how to market and position itself anyway. Not to mention that its free speech woes — irrevocably tied to those of its competitors — were hardly surprising. If anything, Mr. Musk was something of a golden ticket: someone to hand everything over to.

    The influx/exodus cycle started before the news was official… Muskovites joined/returned to Twitter in droves, opponents found scrolls bearing ancient Mastodon tutorials and set up their own mini-networks (let’s leave that irony steaming in the corner for now).

    None of this is new: businesses are bought and sold all the time, the right to free speech is never unconditional (nor should it be), and the general populace move and shift and migrate betwixt different services, platforms, apps, and spaces.

    What seems new, or at least different, about these latter media trends, issues, and events is the sheer volume of coverage they receive. What tends to happen with news from media industries (be they creative, social, or otherwise) is wall-to-wall coverage for a given week or two, before things peter out and we move on to the next block. It seems that online culture operates at two speeds: an instantaneous, rolling, roiling stream of chaos; and a broader, slightly slower rise and fall, where you can actually see trends come and go across a given time period. Taking the Oscars slap as an example: maybe that rise and fall lasts a week. Sometimes it might last two to four weeks, as in the case of Musk and Twitter.

    How, then, do we consider or position these two speeds in broader ‘culture’?

    Like all of the aforementioned, Trump was not a new phenomenon. Populism was already a tried and tested political strategy by 2015-16; just, admittedly, a strategy that many of us hoped had faded into obsolescence. However, true to the 20-30 year cycle of such things, Trump emerged. And while his wings were — mostly — clipped by the checks and balances of the over-complex American political system, the real legacy of his reign is our current post-truth moment. And that legacy is exemplified by a classic communications strategy: jamming. Jam the airwaves for a week, so everyone is talking about only one thing. Distract everyone from deeper issues that need work.

    This jamming doesn’t necessarily come from politicians, from strategists, from agencies, as it may once have done. Rather, it comes from a conversational consensus emerging from platforms — and this consensus is most likely algorithmically driven. That’s the real concern. And as much as Musk may want to open up the doors and release the code, it’s really not that straightforward.

    The algorithms behind social media platforms are complex — more than that, they are nested, like a kind of digital Rube Goldberg machine. People working on one section of the code are often neither aware of, nor able to comprehend, what other teams might be working on, beyond any do-not-disturb-type directives from on high. As scholar Nick Seaver says in a recent Washington Post piece, “The people inside Twitter want to understand how their algorithm works, too.” (Albergotti 2022)

    Algorithms — at least those employed by companies like Twitter — are built to stoke the fires of engagement. And there ain’t no gasoline like reactions, like outrage, like whatever the ‘big thing’ is for that particular week. These wildfires also intersect with the broader culture in ways that it takes longer-form criticism to meaningfully engage with and understand (I would say academic scholarship, but we often miss the mark, or, more accurately, given glacial peer review turnarounds, the boat).
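    To make that engagement logic concrete, here is a minimal, purely hypothetical sketch of an engagement-weighted ranker. The signals, weights, and scoring function are my own invention for illustration (they are not Twitter’s actual code, which, as above, even its own engineers struggle to see whole), but they show how a feed tips toward outrage once provocative reactions are weighted more heavily than quiet approval.

    ```python
    # A toy, purely hypothetical engagement ranker. The signals and weights
    # are invented for illustration; real platform rankers are vastly more
    # complex (and, as noted above, nested and opaque even to their makers).

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        replies: int
        reshares: int
        angry_reacts: int  # a stand-in for "outrage" signals

    def engagement_score(post: Post) -> float:
        # Hypothetical weights: signals that provoke responses are rewarded
        # far more than passive approval, so heated posts rise to the top.
        return (1.0 * post.likes
                + 4.0 * post.replies
                + 6.0 * post.reshares
                + 8.0 * post.angry_reacts)

    feed = [
        Post("A measured, tepid take", likes=120, replies=4, reshares=2, angry_reacts=0),
        Post("This week's Big Thing, furiously", likes=40, replies=90, reshares=60, angry_reacts=75),
    ]

    # Sorting by score, descending: the furious post wins handily.
    for post in sorted(feed, key=engagement_score, reverse=True):
        print(f"{engagement_score(post):7.1f}  {post.text}")
    ```

    Even in this cartoonish version, the tepid take (120 likes, little else) scores 148 against the furious post’s 1360: the ranking follows the weights, not the audience size.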

    Thanks partly to COVID but also to general mental health stuff, I’ve been on a weird journey with social media (and news, to be fair) over the past 3-5 years. Occasional sabbaticals have certainly helped, but increasingly I’m just not checking it. This year I’ve found more and more writers and commentators whose long-form work I appreciate as a way of keeping across things, but also just for slightly more measured takes. Tepid takes. Not like a spa but more like a heated pool. This is partly why I started this newsletter-based journey: to let myself think things through in a way that didn’t need to be posted immediately, but also didn’t have to wait months or years for peer review. Somewhere beyond even the second, trend-based speed I mentioned above.

    What it really lets me do, though, is disengage from the constant flow of algorithmically-driven media, opinion, reaction, and so on, while still doing that thinking in a relevant and appropriate way. What I’m hoping is that this kind of distance lets me turn around and observe that flow in new and interesting ways.


    Below the divider

    At the end of each post I link a few sites, posts, articles, videos that have piqued my interest of late. Some are connected to my research, some to teaching and other parts of academia, still others are… significantly less so (let’s keep some fun going, shall we?).


    Albergotti, Reed. 2022. “Elon Musk Wants Twitter’s Algorithm to Be Public. It’s Not That Simple.” Washington Post, April 16, 2022.