The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: internet

  • Swings X Roundabouts

Remember the good old days of social media, when we’d all sit around laughing at a Good Tweet™? Me neither. Actually, that was never a thing. Photo by Anna Shvets on Pexels.

    Originally I was going to post some condensed form of this to socials, but I thought some may be interested in an extended ramble and/or the workflows involved.

I deleted my Twitter last year in a mild fit of ethical superiority. I’d been on the platform some 14 years at that point. At first, I delighted in the novelty of microblogging: short little bursts of thought that people could read through, respond to, re-post themselves. But then, as is now de rigueur for all platforms, things changed. Even before Elon took over, the app started tweaking little bits and pieces, changing the way information was presented, prioritised, and delivered. Come the mid-2010s, it just wasn’t the same any more; by that stage, though, so many people that I knew and/or needed to know of were using the app. It became something I checked weekly, like all my other social network pages, some blogs, etc. One more feed.

    Elon’s takeover, though, seemed like a fitting exit point. Many others felt the same way. I kind of rushed the breakaway, though; I did download all my data, thank the maker, but in terms of flagging the move with people who followed me for various reasons (personal, professional, tracking related declines, etc), I just… didn’t. I set up a Mastodon on the PKM instance, because that was a nice community that I’d found myself in as a positive byproduct of a rather all-encompassing obsession with productivity, life organisation, and information retention/recycling. I’m still on the ‘don (or Masta, per your preference), though I’ve shifted to the main mastodon.social instance to make automation and re-posting easier.

Anyway, to cut to the chase, I rebooted the ol’ Twitter/X/Elon.com account in the last couple of months just to keep track of people who’ve not yet shifted elsewhere.1 What I didn’t manage to do before I shut it down last year, though, was to export or keep a record of those 700-odd people I was following, nor did I just transfer them over to Mastodon, which tools like Movetodon allow you to do pretty seamlessly.

Thankfully, buried in the data export was a JavaScript file called “following.js”, which contained IDs and URLs for all the Twitter accounts I’d originally followed. Bear in mind, though, that these were not the Twitter usernames, e.g. @NY152 or @Shopgirl, but rather the ID numbers that Twitter creates as stable references for each user. The user IDs and URLs were also surrounded by all the JavaScript guff2 used to display the info in a readable form:

{
  "following": {
    "accountId": "123456",
    "userLink": "https://twitter.com/intent/user?user_id=123456"
  }
},
{
  "following": {
    "accountId": "789012",
    "userLink": "https://twitter.com/intent/user?user_id=789012"
  }
},
{
  "following": {
    "accountId": "345678",
    "userLink": "https://twitter.com/intent/user?user_id=345678"
  }
},

I have a rudimentary grasp of very basic Python, but JavaScript remains beyond me, so I used the wonderful TextBuddy to remove everything but the URLs, then saved the result as a text file. String manipulation is a wonderful process, but unfortunately the checking of each account remains up to me.
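
For anyone who’d rather script that clean-up than do it in a text utility, here’s a minimal Python sketch of the same idea. It just pulls the intent URLs out of the export and ignores the JavaScript wrapping entirely; the output filename is my own assumption, not anything from the actual archive beyond following.js itself.

import re

# Read the raw following.js file from the Twitter data export.
with open("following.js", encoding="utf-8") as f:
    raw = f.read()

# Each followed account appears as an "intent" URL with a numeric user ID,
# so a single pattern match skips all the surrounding JavaScript guff.
urls = re.findall(r"https://twitter\.com/intent/user\?user_id=\d+", raw)

# Save one URL per line for checking later.
with open("following_urls.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(urls))

print(f"Extracted {len(urls)} account URLs.")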

So whenever I have a spare hour, I’ve been sitting down at the computer and copying and pasting a bunch of URLs into the “Open Multiple URLs” Chrome extension. It’s tedious work, obviously. But it’s been really interesting to see (a) who is inactive on Twitter, and for how long; (b) who switched to private, whether before Elon or since; (c) who’s moved to Masta or elsewhere; and (d) who’s still active, and how. It’s also just a great chance to filter out all the rubbish accounts I followed over those fourteen years!
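
In theory, even the tab-opening could be semi-automated with Python’s standard webbrowser module, pausing between batches. A rough sketch, assuming the following_urls.txt file from the previous step and a purely arbitrary batch size:

import webbrowser

BATCH_SIZE = 10  # arbitrary: how many tabs to open per sitting

# Load the URL list saved from the extraction step.
with open("following_urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

# Open a batch of tabs, then wait, so the checking happens at a human pace.
for i in range(0, len(urls), BATCH_SIZE):
    for url in urls[i:i + BATCH_SIZE]:
        webbrowser.open_new_tab(url)
    done = min(i + BATCH_SIZE, len(urls))
    input(f"Opened {done} of {len(urls)}. Press Enter for the next batch...")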

In general terms, anyone with any level of tech knowledge or broad online following has shifted almost entirely to different services, maybe leaving up a link or a pinned post to catch any stray visitors. Probably around 40-50% of them are still active in some way, be that sharing work or thoughts with an established audience, or staying in touch with communities.3 Several of the URLs have hit 404s, which means the user has deleted their X account entirely; good for you, even though I have no idea who you are/were!

As I develop my thoughts around platforms, algorithms, culture, and so on, reflecting on my own platform use, tech setup, and engagements with data is becoming more than just a hobby; it’s forming a core part of the process. I’ve always struggled to reconcile my creative work and my personal interests/hobbies with my academic interests. But I think that from now on I just have to accept that there will always be overlap, particularly if I’m to do anything with these ideas, be it write a screenplay or a book, a bunch of blog posts, or anything academical.4


    Notes

    1. I also really like that I locked down the @binnsy username before anyone else got to it; there are plenty of Binnses even just in my family who use that nickname! ↩︎
    2. Guff is the technical term, obviously. ↩︎
3. This is obviously prevalent in my field of academia, where so many supportive communities have been established over long periods of time, e.g. #PhDchat etc etc. I realised after I deleted my account that even though I don’t participate anywhere near as much as I used to, these are such valuable spaces when I do log on, and obviously for countless others. You don’t and can’t just throw that shit away. ↩︎
    4. You heard me. ↩︎
  • This algorithmic moment

    Generated by Leonardo AI; prompts by me.

So much of what I’m being fed at the moment concerns the recent wave of AI. While we are seeing something of a plateauing of the hype cycle, I think (/hope), it’s still very present as an issue, a question, an opportunity, a hope, a fear, a concept. I’ll resist my usual impulse to historicise this last year or two of innovation within the context of AI research, which for decades was popularly mocked and institutionally underfunded; I’ll also resist the even stronger impulse to look at AI within the even broader milieu of technology, history, media, and society, which is, apparently, my actual day job.

What I’ll do instead is drop the phrase algorithmic moment, which is what I’ve been trying to explore, define, and work through over the last 18 months. I’m heading back to work next week after an extended period of leave, so this seems as good a way as any of getting my head back into some of the research I left to one side for a while.

The algorithmic moment is what we’re in right now. It’s the current AI bubble, hype cycle, growth spurt, however you define this wave (some have dubbed it the AI spring or boom, to distinguish it from the various AI winters of the last century1). In trying to bracket it off with concrete dates, I’ve settled more or less on the emergence of the GPT-3 Beta in 2020. Of course OpenAI and other AI innovations predated this, but it was GPT-3 and its children ChatGPT and DALL-E 2 that really propelled discussions of AI and its possibilities and challenges into the mainstream.

This also means that much of this moment is swept up with the COVID pandemic. While online life had bled into the real world in interesting ways pre-2020, it was really that year, during urban lockdowns, family Zooms, working from home, and a deeply felt global trauma, that online and off felt one and the same. AI innovators capitalised on the moment, seizing capital (financial and cultural) in order to promise a remote revolution built on AI and its now-shunned siblings in discourse, web3 and NFTs.

    How AI plugs into the web as a system is a further consideration — prior to this current boom, AI datasets in research were often closed. But OpenAI and its contemporaries used the internet itself as their dataset. All of humanity’s knowledge, writing, ideas, artistic output, fears, hopes, dreams, scraped and plugged into an algorithm, to then be analysed, searched, filtered, reworked at will by anyone.

The downfall of FTX and the trial of Sam Bankman-Fried more or less sounded the death knell of NFTs as the Next Big Thing, if not of web3 as a broader notion to be deployed across open-source, federated applications. And as NFTs slowly left the tech conversation, as that hype cycle started falling, the AI boom filled the void, such that one can hardly log on to a tech news site or half of the most popular Substacks without seeing a diatribe or puff piece (not unlike this very blog post) about the latest development.

ChatGPT has become a hit productivity tool, as well as a boon to students, authors, copywriters and content creators the world over. AI is a headache for many teachers and academics, many of whom fail not only to grasp its actual power and operations, but also how to usefully and constructively implement the technology in class activities and assessment. DALL-E, Midjourney and the like remain controversial phenomena in art and creative communities, where some hail them as invaluable aids, and others debate their ethics and value.

As with all previous revolutions, the dust will settle on that of AI. The research and innovation will continue as it always has, but out of the limelight and away from the headlines. It feels currently like we cannot keep up, that it’s all happening too fast, that if only we slowed down and thought about things, we could try and understand how we’ll be impacted, how everything might change. At the risk of historicising, exactly like I said I wouldn’t, people thought the same of the printing press, the aeroplane, and the computer. In 2003, Andrew Murphie and John Potts were trying to capture the flux and flow and tension and release of culture and technology. They were grappling in particular with the widespread adoption of the internet, and how to bring that into line with other systems and theories of community and communication. Jean-François Lyotard had said that new communications networks functioned largely on “language games” between machines and humans. Building on this idea, Murphie and Potts suggested that the information economy “needs us to make unexpected ‘moves’ in these games or it will wind down through a kind of natural attrition. [The information economy] feeds on new patterns and in the process sets up a kind of freedom of movement within it in order to gain access to the new.”2

    The information economy has given way, now, to the platform economy. It might be easy, then, to think that the internet is dead and decaying or, at least, kind of withering or atrophying. Similarly, it can be even easier to think that in this locked-down, walled-off, platform- and app-based existence where online and offline are more or less congruent, we are without control. I’ve been dropping breadcrumbs over these last few posts as to how we might resist in some small way, if not to the detriment of the system, then at least to the benefit of our own mental states; and I hope to keep doing this in future posts (and over on Mastodon).

For me, the above thoughts have been gestating for a long time, but they remain immature, unpolished, unfiltered, which, in its own way, is a form of resistance to the popular image of the opaque black box of algorithmic systems. I am still trying to figure out what to do with them; whether to develop them further into a series of academic articles or a monograph, to just keep posting random bits and bobs here on this site, or to seed them into a creative piece, be it a film, book, or something else entirely. Maybe a little of everything, but I’m in no rush.

As a postscript, I’m also publishing this here to resist another system, that of academic publishing, which is monolithic, glacial, frustrating, and usually hidden behind a paywall for a privileged few. Anyway, I’m not expecting anyone to read this, much less use or cite it in their work, but better it be here if someone needs it than locked away in a journal.

    As a bookend for the AI-generated image that opened the post, I asked Bard for “a cool sign-off for my blog posts about technology, history, and culture” and it offered the following, so here you go…

    Signing off before the robots take over. (Just kidding… maybe.)


    Notes

    1. For an excellent history of AI up to around 1990, I can’t recommend enough AI: The Tumultuous History of the Search for Artificial Intelligence by Daniel Crevier. Crevier has made the book available for download via ResearchGate. ↩︎
    2. Murphie, Andrew, and John Potts. 2003. Culture and Technology. London: Macmillan Education UK, p. 208. https://doi.org/10.1007/978-1-137-08938-0. ↩︎
