Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.
I had planned this week to post something very different, but in light of the way my week has panned out, I’m feeling all of the following much more keenly, particularly in the wake of some of my rants about social media and platforms and such like.
My primary computer, for nearly ten years, was a 2014 Mac with Retina display. It was a beautiful beast, and served me very well, particularly through lockdowns when there were some issues getting an external monitor for my work laptop. Come early 2023, though, it was showing signs of wear and tear. I didn’t really want to fork out for a new machine when I had a perfectly good laptop from work, so I let it go out to pasture (the Apple Store).
After some time off work last year, though, I wanted to put some effort into separating personal files from my work stuff. Up until this point, I had used a cloud service for everything, without a backup (shock horror). I do realise that physically separating out machines and hard drives for work and non-work is somewhat redundant in this age of clouds, but doing the actual labour of downloading from the cloud service, sorting folders onto the right hard drives and machines, then backing everything up appropriately, was not a little therapeutic.
Having carved out the workspace on the newer Macbook, I was left wondering what to do for a personal machine. There is always, obviously, the desire to rush out and drop a great deal of money on the latest model, but for various reasons, this is not currently a possibility for me. Aside from that, I’m surrounded by old tech, left in cupboards, not yet eBayed or traded in or taken away for recycling. I am aware and conscious enough of the horrific impact of e-waste, and with my recent interests in a smaller, more intimate, cosy, sustainable internet, I thought that maybe this would be a chance to put my money where my mouth is. If you can’t be with the sleek new tech you love, honey, love the slightly chunkier, dustier tech you’re with.
Plus, I’ve always loved the idea of tinkering with tech, even if I’ve never actually done anything like this properly. My plans are neither unachievable nor overly ambitious; I have two computers, broadly defined, that I hope to revive and use in tandem as a kind of compound personal machine. The first of these is a Raspberry Pi 3B+, the second a mid-to-late-2011 MacBook Pro: the last MacBook I owned that wasn’t a work device.
Come along with me, won’t you, on this journey of learning and self-discovery? Coming tomorrow (or at least in the coming days)… Part 2: Mmm, Pi.
Remember the good old days of social media, when we’d all sit around laughing at a Good Tweet™? Me neither. Actually, that was never a thing. Photo by Anna Shvets on Pexels.
Originally I was going to post some condensed form of this to socials, but I thought some may be interested in an extended ramble and/or the workflows involved.
I deleted my Twitter last year in a mild fit of ethical superiority. I’d been on the platform some 14 years at that point. At first, I delighted in the novelty of microblogging: short little bursts of thought that people could read through, respond to, re-post themselves. But then, as is now de rigueur for all platforms, things changed. Even before Elon took over, the app started tweaking little bits and pieces, changing the way information was presented, prioritised, and delivered. Come the mid-2010s, it just wasn’t the same any more; by that stage, though, so many people that I knew and/or needed to know of were using the app. It became something I checked weekly, like all my other social network pages, some blogs, etc. One more feed.
Elon’s takeover, though, seemed like a fitting exit point. Many others felt the same way. I kind of rushed the breakaway, though; I did download all my data, thank the maker, but in terms of flagging the move with people who followed me for various reasons (personal, professional, tracking related declines, etc), I just… didn’t. I set up a Mastodon on the PKM instance, because that was a nice community that I’d found myself in as a positive byproduct of a rather all-encompassing obsession with productivity, life organisation, and information retention/recycling. I’m still on the ‘don (or Masta, per your preference), though I’ve shifted to the main mastodon.social instance to make automation and re-posting easier.
Anyway, to cut to the chase, I rebooted the ol’ Twitter/X/Elon.com account in the last couple of months just to keep track of people who’ve not yet shifted elsewhere.1 What I didn’t manage to do before I shut it down last year, though, was to export or keep a record of the 700-odd people I was following, nor did I just transfer them over to Mastodon, which tools like Movetodon allow you to do pretty seamlessly.
Thankfully, buried in the data export was a JavaScript file called “following.js”, which contained IDs and URLs for all the Twitter accounts I’d originally followed. Bear in mind, though, not the Twitter usernames, e.g. @NY152 or @Shopgirl, but rather the ID number that Twitter creates as a stable reference for each user. The user IDs and URLs were also surrounded by all the JavaScript guff2 used to display the info in a readable form (the sketch below shows roughly what that looks like).
I have a rudimentary grasp of very basic Python, but JavaScript remains beyond me, so I used the wonderful TextBuddy to remove everything but the URLs, then saved this as a text file. Though string manipulation is a wonderful process, unfortunately the checking of each account remains up to me.
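For anyone who’d rather script that clean-up than do it by hand, a minimal Python sketch might look like the following. It assumes the export uses the usual window.YTD.following.part0 = [...] wrapper with a userLink field per entry; the file path and structure are assumptions to check against your own archive, not a guaranteed format.

```python
import json
import re
from pathlib import Path

# following.js is assumed to look roughly like:
#   window.YTD.following.part0 = [
#     {"following": {"accountId": "12345",
#                    "userLink": "https://twitter.com/intent/user?user_id=12345"}},
#     ...
#   ]
raw = Path("data/following.js").read_text(encoding="utf-8")

# Strip the leading JavaScript assignment so the remainder parses as plain JSON.
json_text = re.sub(r"^window\.YTD\.following\.part0\s*=\s*", "", raw.strip())
entries = json.loads(json_text)

# Pull out the profile URLs (each entry assumed to hold a "following" object
# with a "userLink" field).
urls = [entry["following"]["userLink"] for entry in entries]

Path("following_urls.txt").write_text("\n".join(urls) + "\n", encoding="utf-8")
print(f"Saved {len(urls)} URLs to following_urls.txt")
```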
So whenever I have a spare hour, I’ve been sitting down at the computer and copying and pasting a bunch of URLs into the “Open Multiple URLs” Chrome extension. It’s tedious work, obviously. But it’s been really interesting to see (a) who is inactive on Twitter and for how long they’ve been so; (b) who’s switched to private, whether since Elon or before; (c) who’s moved to Masta or elsewhere; and (d) who’s still active, and how. It’s also just a great chance to filter out all the rubbish accounts I followed over those fourteen years!
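Since this step is really just opening batches of links, the extension could also be swapped for a few lines of Python from the standard library. A rough sketch, assuming the following_urls.txt file produced above (the batch size and pacing are placeholders to tweak, and it simply opens tabs in whatever your default browser is):

```python
import webbrowser
from pathlib import Path

BATCH_SIZE = 10  # how many tabs to open per sitting; tune to your browser and patience

# Load the URL list saved earlier, skipping any blank lines.
urls = [u for u in Path("following_urls.txt").read_text(encoding="utf-8").splitlines() if u.strip()]

for start in range(0, len(urls), BATCH_SIZE):
    batch = urls[start:start + BATCH_SIZE]
    for url in batch:
        webbrowser.open_new_tab(url)  # opens each profile in the default browser
    done = start + len(batch)
    input(f"Opened {done}/{len(urls)} URLs - press Enter for the next batch...")
```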
In general terms, anyone with any level of tech knowledge or broad online following has shifted almost entirely to different services, maybe leaving up a link or a pinned post to catch any stray visitors. Probably around 40-50% of them are still active in some way, be that sharing work or thoughts with an established audience, or staying in touch with communities.3 Several of the URLs have hit 404s, which means that user has just deleted their X account entirely; good for you, even though I have no idea who you are/were!
As I develop my thoughts around platforms, algorithms, culture, and so on, reflecting on my own platform use, tech setup, and engagements with data is becoming more than just a hobby; it’s forming a core part of the process. I’ve always struggled to reconcile my creative work and my personal interests/hobbies with my academic interests. But I think that from now on I just have to accept that there will always be overlap, particularly if I’m to do anything with these ideas, be it write a screenplay or a book, a bunch of blog posts, or anything academical.4
Notes
I also really like that I locked down the @binnsy username before anyone else got to it; there are plenty of Binnses even just in my family who use that nickname! ↩︎
This is obviously prevalent in my field of academia, where so many supportive communities have been established over long periods of time, e.g. #PhDchat etc etc. I realised after I deleted my account that even though I don’t participate anywhere near like I used to, these are such valuable spaces when I do log on, and obviously for countless others. You don’t and can’t just throw that shit away. ↩︎
So much of what I’m being fed at the moment concerns the recent wave of AI. While we are, I think (/hope), seeing something of a plateauing of the hype cycle, it’s still very present as an issue, a question, an opportunity, a hope, a fear, a concept. I’ll resist my usual impulse to historicise this last year or two of innovation within the contexts of AI research, which for decades was popularly mocked and institutionally underfunded; I’ll also resist the even stronger impulse to look at AI within the even broader milieu of technology, history, media, and society, which is, apparently, my actual day job.
What I’ll do instead is drop the phrase algorithmic moment, which is what I’ve been trying to explore, define, and work through over the last 18 months. I’m heading back to work next week after an extended period of leave, so this seems as good a way as any of getting my head back into some of the research I left to one side for a while.
The algorithmic moment is what we’re in right now: the current AI bubble, hype cycle, growth spurt, whatever you want to call this wave (some have dubbed it the AI spring or boom, to distinguish it from the various AI winters over the last century1). In trying to bracket it off with concrete times, I’ve settled more or less on the emergence of the GPT-3 beta in 2020. Of course OpenAI and other AI innovations predated this, but it was GPT-3 and its children, ChatGPT and DALL-E 2, that really propelled discussions of AI and its possibilities and challenges into the mainstream.
This also means that much of this moment is swept up with the COVID pandemic. While online life had bled into the real world in interesting ways pre-2020, it was really that year, during urban lockdowns, family Zooms, working from home, and a deeply felt global trauma, that online and off felt one and the same. AI innovators capitalised on the moment, seizing capital (financial and cultural) in order to promise a remote revolution built on AI and its now-shunned siblings in discourse, web3 and NFTs.
How AI plugs into the web as a system is a further consideration — prior to this current boom, AI datasets in research were often closed. But OpenAI and its contemporaries used the internet itself as their dataset. All of humanity’s knowledge, writing, ideas, artistic output, fears, hopes, dreams, scraped and plugged into an algorithm, to then be analysed, searched, filtered, reworked at will by anyone.
The downfall of FTX and the trial of Sam Bankman-Fried more or less marked the death knell of NFTs as the Next Big Thing, if not of web3 as a broader notion to be deployed across open-source, federated applications. And as NFTs slowly left the tech conversation, as that hype cycle started falling, the AI boom filled the void, such that one can hardly log on to a tech news site or half of the most popular Substacks without seeing a diatribe or puff piece (not unlike this very blog post) about the latest development.
ChatGPT has become a hit productivity tool, as well as a boon to students, authors, copywriters and content creators the world over. AI is a headache for many teachers and academics, many of whom fail to grasp not only its actual power and operations, but also how to usefully and constructively implement the technology in class activities and assessment. DALL-E, Midjourney and the like remain controversial phenomena in art and creative communities, where some hail them as invaluable aids, and others debate their ethics and value.
As with all previous revolutions, the dust will settle on that of AI. The research and innovation will continue as it always has, but out of the limelight and away from the headlines. It currently feels like we cannot keep up, that it’s all happening too fast, that if only we slowed down and thought about things, we could try to understand how we’ll be impacted, how everything might change. At the risk of historicising, exactly like I said I wouldn’t, people thought the same of the printing press, the aeroplane, and the computer. In 2002, Andrew Murphie and John Potts were trying to capture the flux and flow and tension and release of culture and technology. They were grappling in particular with the widespread adoption of the internet, and how to bring that into line with other systems and theories of community and communication. Jean-François Lyotard had said that new communications networks functioned largely on “language games” between machines and humans. Building on this idea, Murphie and Potts suggested that the information economy “needs us to make unexpected ‘moves’ in these games or it will wind down through a kind of natural attrition. [The information economy] feeds on new patterns and in the process sets up a kind of freedom of movement within it in order to gain access to the new.”2
The information economy has given way, now, to the platform economy. It might be easy, then, to think that the internet is dead and decaying or, at least, kind of withering or atrophying. Similarly, it can be even easier to think that in this locked-down, walled-off, platform- and app-based existence where online and offline are more or less congruent, we are without control. I’ve been dropping breadcrumbs over these last few posts as to how we might resist in some small way, if not to the detriment of the system, then at least to the benefit of our own mental states; and I hope to keep doing this in future posts (and over on Mastodon).
For me, the above thoughts have been gestating for a long time, but they remain immature, unpolished, and unfiltered, which, in its own way, is a form of resistance to the popular image of the opaque black box of algorithmic systems. I am still trying to figure out what to do with them; whether to develop them further into a series of academic articles or a monograph, to just keep posting random bits and bobs here on this site, or to seed them into a creative piece, be it a film, book, or something else entirely. Maybe a little of everything, but I’m in no rush.
As a postscript, I’m also publishing this here to resist another system, that of academic publishing, which is monolithic, glacial, frustrating, and usually hidden behind a paywall. Anyway, I’m not expecting anyone to read this, much less use or cite it in their work, but better it be here if someone needs it than reserved for a privileged few.
As a bookend for the AI-generated image that opened the post, I asked Bard for “a cool sign-off for my blog posts about technology, history, and culture” and it offered the following, so here you go…
Signing off before the robots take over. (Just kidding… maybe.)
Notes
For an excellent history of AI up to around 1990, I can’t recommend enough AI: The Tumultuous History of the Search for Artificial Intelligence by Daniel Crevier. Crevier has made the book available for download via ResearchGate. ↩︎
I spent 2023 learning a great deal about myself. I know everyone always says that around this time of year, but in my case it’s true on a personal, psychological, and physiological level. Leaving all of that to one side, it’s also the year that I devoted the most time (too much?) to finding and building a system of notetaking, resource- and time-keeping, and knowledge management that really worked for me.
At the end of the year I’ve managed to consolidate everything down to a handful of tools:
Readwise & Readwise Reader (highlights, literature notes, read-later)
Raindrop (bookmarks, sorted and organised per life/work commitments, e.g. research, writing, story resources, health, fun stuff)
Todoist (task management)
Day One (private journaling, morning pages, reflections, mood tracking)
IFTTT (general app connections and automation)
I pay for premium versions of all of the above, partly because it keeps me accountable for what I’m using and doing, but also because I like the apps, have always had great support from their teams, and think they’re products worth supporting, so that those who maybe can’t afford to pay can still use them.
Project management remains an issue, but I think I’ve finally accepted that I might just have to delegate or outsource some of that, somewhere, somehow.
Other processes I tried and let go of this year include Notion, bullet journaling, and a variety of other apps like Zapier, ClickUp and Inoreader. I had tried many of these before, but this was a proper test to see if they could be worked into and add value to the system.
As with many things in life, you’ll hear a million ways to ‘do’ productivity, and you’ll listen to a few key phrases, but you won’t ever really take them in or implement them. The main one for me was ‘ignore every other system and work on your own’. This isn’t to say you shouldn’t check out what others have done, but you cannot and should not then immediately try to copy most of their system.
I would fall into this trap a lot. It begins with watching a great video by Nicole van der Hoeven, or FromSergio, or even letting out a little squeal when Python Programmer jumps on the Obsidian bandwagon (look, one day I’ll learn Python, but 2024-5 probably isn’t it). You then dive into the description, download every Obsidian plugin they mention, immediately change the frontmatter and template of every current and future note, then tweak your Notion or your Todoist or your calendar or your bullet journal to exactly mirror the Perfect System that this Productivity God hath wrought.
But of course, none of the systems are perfect. I mean, they might be perfect for Nicole or Sergio or Giles at the time, but these folx are almost certainly tweaking, adjusting, and refining constantly, not to mention that they are informational content creators: they might present a cool method or system that they’ve come across, but they also plainly state in their videos that it might not be for everyone.
Cherry-picking the bits of different systems that work for me has been a game-changer, as has case-based or small scale testing. It sounds so simple when I type it out like that, and is basically the ethos of every ethical/responsible/sensible experiment ever, but for me, it’s taken some time to really internalise these ideas. In my case, my system/s will never be perfect, because there is no perfect. You just plug away, do the best you can, and try not to let too much obsession with shiny things get in the way of actually working on what you need to work on.
Organising my notes isn’t my job. Tweaking my frontmatter isn’t my passion. I won’t get promoted for nailing the GTD workflow in Todoist, nor will I feel a warm glow at the end of the day by removing extraneous apps from my phone. For me, if it ain’t broke, I don’t need to lose time trying to fix it. If I find myself obsessing, maybe it’s just time to step away, go and look at a tree, read a book, or play some music.
My system works for now. I enjoy reading about systems and how other people are thriving, and might take the odd piece of advice on board here and there. But for 2024, my goal isn’t the system; nor is it using my system to be productive. My main goal for 2024 is to be just productive enough, wherever I need to be, to try living for a change.
“Hipster style bearded man taking selfie with selfie stick.” – actual description from Shutterstock. Click to see full copyright details and purchase a high-res non-watermarked version, if that’s really your bag.
Today I had the pleasure of attending the RMIT nonfictionLab‘s symposium on interactive documentary. A great many interesting talks were given, and I’m hoping to collate some of my notes into coherent ramblings here and elsewhere over the coming days.
I was reading various tweets today, watching some of the presentations at the conference, and ruminating more generally on photography, mobile media and the ‘self’. As something of a disclaimer, I abhor selfie sticks. I find their presence and purpose incomprehensible, and the people who use them (for the most part) arrogant and, possibly appropriately, self-absorbed.
In spite of this, my mind kept returning to them today, in light of some of the discussion around ‘autodocumentary’. In using our smart devices to track and photograph and record and measure every movement we make, we are, in a sense, creating a narrative; a documentary of our lives.
The ‘selfie stick’, ostensibly, aids in the act of taking ‘selfies’, or photographs of the photographer. The ‘selfie’ finds its origins in the ‘fridge shot’: an often poorly-composed, over-exposed photograph of the photographer and one or several other people. I find this origin important, given that the current ‘selfie’ is a refined and technologically-improved (allegedly) version of the earlier iteration.
What struck me today is that the ‘selfie stick’, by its nature, is a step in a weird direction. Physically, the device distances the camera from the ‘self’, allowing a modicum of control over the composition and quality of the resulting artefact. I think it could be argued, then, that the selfie stick does not create ‘selfies’ as we have come to know them. A photograph taken with the aid of a selfie stick is more akin to one taken with the aid of a tripod, in that the photographer takes much more care with the composition and preparation of the shot.
‘Photographing is essentially an act of non-intervention,’ writes Susan Sontag in her magnificent On Photography (1977), ‘[though] the act of photographing is more than passive observing.’
Sontag is relaying here that while photography necessarily precludes any interaction or meddling with the subject (if recording something as it appears in nature or, for want of any other word, ‘reality’), it cannot be seen as just that: recording. In the framing up of any given subject, you lose any claim to objectivity.
I would argue that in holding the camera at arm’s length, with no idea of what the frame is, or what the light is like, or whether you and your mates are even in the damn picture, the ‘fridge shot’ and, to an extent, the original smartphone selfie (before front-facing cameras, introduced to Apple devices with 2010’s iPhone 4 – yep, only five years ago), are more in line with the former definition. This is mainly due to the fact that the artist’s control over the artefact is limited, both physically and in terms of the relinquishing of some of the act to the technology itself.
The ‘distancing’ that comes into play with the selfie stick is an attempt to control the entirety of the act of taking selfies which, in some small way, detracts from the entire philosophy and purpose of the selfie.
Yet another, this time thoroughly thought-out, reason to detest the selfie stick.