The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: Writing

  • All the King’s horses

    Seems about right. Generated with Leonardo.Ai, prompts by me.

    I’ve written previously about the apps I use. When it comes to actual productivity methods, though, I’m usually in one of (what I hope are only) two modes: Complicate Mode (CM) or Simplify Mode (SM).

    CM can be fun, because it’s not always about a feeling of overwhelm or about over-complicating things. In its healthier form it might be learning about new modes and methods, discovering new ways I could optimise, satiating my manic monkey brain with lots of shiny new tools, and generally wilfully being in the weeds of it all.

    However, CM can also really suck, because it absolutely can feel overwhelming, and it can absolutely feel like I’m lost in the weeds, stuck in the mud, too distracted by the new systems and tools to actually do anything. CM can also feel like a plateau, like nothing is working, like the wheels are spinning and I don’t know how to get traction again.

    By contrast, SM usually arrives just after one of these stuck-in-the-mud periods, when I’m just tired and over it. I liken it to a certain point on a long flight. I’m a fairly anxious flyer. Never so much that it’s stopped me travelling, but it’s never an A1 top-tier experience for me. However, on a long-haul flight, usually around 3-5 hours in, it feels like I just ‘run out’ of stress. I know this isn’t what’s actually happening, but it seems like I worked myself up too much, and my body just calms itself enough to be resigned to its situation. And then I’m basically just tired and bored for the remainder of the trip.

    So when I’ve had a period of overwhelm, a period of not getting things done, this usually coincides with CM. I say to myself, “If I can just find the right system, tool, method, app, hack, I’ll get out of this rut.” This is bad CM. Not-healthy CM. Once I’m out of that, though (which, for future self-reference, is never as a result of a Shiny New Thing), I’ll usually slide into SM, when I want to ease out of that mode, take care of myself a bit, be realistic, and strip things back to basics. This is usually not just in terms of productivity/work, but usually extends to overall wellbeing, relationships, creativity, lifestyle, fun: all the non-work stuff, basically.

    The first sign I’m heading into SM is that I’ll unsubscribe from a bunch of app subscriptions (and reading/watching subscriptions too), go back through my bank history to make sure I’m not being charged for anything I’m not into or actively using right now, and note down some simple short-term lifestyle goals (e.g. try to get to the gym in the next few days, meditate every other day, go touch grass or look at a body of water once a week etc). In terms of work, it’s equally simple: try to pick a couple of simple tasks to achieve each day (usually not very brain-heavy) and one large task for the next week/fortnight that I spend a little time on each workday as one of those simple smaller tasks. For instance, I might be working on a journal article; so spending a little time on this during SM might not be writing, per se, but maybe consolidating references, or doing a little reading and note-taking for references I already have but haven’t utilised, or even just a spell-check of what I’ve done so far.

    Phase 1 of SM is usually the above, which I tend to do unconsciously after weeks of stressing myself out and running myself ragged and somehow still doing the essentials of life and work, despite shaving hours, if not days, off my life. Basically, Phase 1 of SM constitutes a bunch of exceptionally good and healthy things to do that I probably should do more regularly to cut off stressful times at the pass; thanks self-preservation brain!

    In terms of strictly productivity, though, SM has previously meant chucking it all in and going back to pen and paper, or chucking in pen and paper and going all in on digital tools (or just one digital tool, which has never worked bro so stop trying it). An even worse thing to do is to go all in on a single new productivity system. This usually takes up a whole day (sometimes two) where I could be either doing shit, or trying to spend quality time figuring out more accurately why shit isn’t getting done, or — probably more to the point — putting everything to one side and giving myself an actual break.

    I’ve had one or two moments of utter desperation, when nothing at all seems like it’s working, when I’ve tried CM and SM and every-other-M to no avail; I’ve even tried taking a bit of a break, but needs must when it comes to somehow just pushing on for whatever reason (personal, financial, professional, psychological, etc). In these moments I’ve had to do a pretty serious and comprehensive life audit. Basically, it’s either whatever note-taking app I see first on my phone, or a piece of paper (preferably larger than A4/letter) and a bunch of textas, or even just a whole bunch of post-its and a dream. Make a hot beverage or fill up that water bottle, sit down at the desk or dining table, or lie in bed or on the floor, and go for it.

    Life Audit Part 1: Commitments and needs/wants

    What are your primary commitments? Your main stressors right now? What are your other stressors? Who are you accountable to/for, or responsible for right now? What do you need to be doing (but actually really need, not just think you need) in just the short term? What do you want to be doing? What are you paying for right now, obviously financially, but what about physically? Psychologically?

    Life Audit Part 2: Sit Rep

    As it stands right now, how are you answering all the questions from Part 1? Are you kinda lying to yourself about what’s most important? How on earth did you get to the place where you think X is more important than Y? What can you remove from this map to simplify things right now? (Don’t actually remove them, just note down somewhere what you could remove.)

    Life Audit Part 3: Tweak and Adjust

    What tools, systems, methods — if any — do you have in place to cope with any of the foregoing? If you have a method/methods, are they really working? What might you tweak/change/add/remove to streamline or improve this system? If you don’t have any systems right now, what simple approach could you try as a light touch in the coming days or weeks? This could be as simple as blocking out your work time and personal time as exactly that, and setting a calendar reminder to try and keep to those times. If you struggle to rest or to give time to important people in your life, why? If your audit is richly developed or super-connected around personal development or lifestyle, or around professional commitments, maybe you need to carve out some time (or not even time, just some headspace) to note down how you can reorient yourself.

    The life audit might be refreshing or energising for some folx, and that’s awesome. For me, though, doing this was taxing. Exhausting. Sometimes debilitating. Maybe doing it more regularly would help, but it really surfaced patterns of thinking and behaviour that had cost me greatly in terms of well-being, welfare, health, time, money, and more besides. So take this as a bit of a disclaimer or warning. It might be good to raise this idea with a loved one or health-type person (GP, psych, religious advisor, etc) before attempting.

    Similarly, maybe a bit of a further disclaimer here. I have read a lot about productivity methods, modes, approaches, gurus, culture, media, and more. I think productivity is something of a myth, and it can also be toxic and dangerous. My personal journey in productivity media and culture has been both a professional interest and a personal interest (at times, obsession). My system probably won’t work for you or anyone really. I’ve learned to tweak, to leave to one side, to adjust and change when needed, and to just drop any pretense of being ‘productive’ if it just ain’t happening.

    Productivity and self-optimisation and their attendant culture are by-products of a capitalist system.1 When we buy into it — psychologically, professionally, or financially — we propagate and perpetuate that system, with its prejudices, its injustices, its biases, and its genuine harms. We might kid ourselves that it’s just for us, it’s just the tonic we need to get going, to be a better employee, partner, friend, or whatever; but when it all boils down to it, we’re human. We’re animals. We’re fallible. There are no hacks, there are no shortcuts; honestly, you just have to do the work. And that work is often hard and/or boring and/or time-consuming. I am finally acknowledging and owning this for myself after several years of ignorance. It’s the least any of us can do if we care.


    This post is a line in the sand in my personal journey. To end a chapter. Turn a page. To think through what I’ve tried at various times; to try and give little names and labels to the approaches and little recovery methods that I think have been most effective, so that I can just pick them up in future as a little package, a little pill to quickly swallow, rather than inefficiently stumbling my way back to the same solutions via Stress Alley and Burnout Junction.

    Moving forward, I also want to linger a little longer in the last couple of paragraphs. But for real this time. It’s easy to say that I believe in slowing down, in valuing life and whatever it brings me, to just spend time: not doing anything necessarily, but certainly not worrying about whether or not I’m being productive or doing the right thing.

    I want to have a simple system that facilitates my being the kind of employee I want to be; the kind of colleague I want to be; the partner I want to be; the immediate family member (e.g. child, parent, grandchild etc) I want to be; the citizen and human I want to be. This isn’t some lofty ambition talking. I’m realistic about how much space in the world I am taking up: it’s more than I ever have before, but far from as much as those people (you know who I mean). I want time and space to work on being all of these people, while also — hopefully — making some changes that leave things slightly better than I found them.

    How’s that for a system?

    Notes

    1. For an outstanding breakdown of what I mean by this, please read Melissa Gregg’s excellent monograph Counterproductive: Time Management in the Knowledge Economy. ↩︎
  • This algorithmic moment

    Generated by Leonardo AI; prompts by me.

    So much of what I’m being fed at the moment concerns the recent wave of AI. While we are seeing something of a plateauing of the hype cycle, I think (/hope), it’s still very present as an issue, a question, an opportunity, a hope, a fear, a concept. I’ll resist my usual impulse to historicise this last year or two of innovation within the contexts of AI research, which for decades was popularly mocked and institutionally underfunded; I’ll also resist the even stronger impulse to look at AI within the even broader milieu of technology, history, media, and society, which is, apparently, my actual day job.

    What I’ll do instead is drop the phrase algorithmic moment, which is what I’ve been trying to explore, define, and work through over the last 18 months. I’m heading back to work next week after an extended period of leave, so this seems as good a way as any of getting my head back into some of the research I left to one side for a while.

    The algorithmic moment is what we’re in at the moment. It’s the current AI bubble, hype cycle, growth spurt, whatever you define this wave as (some have dubbed it the AI spring or boom, to distinguish it from various AI winters over the last century1). In trying to bracket it off with concrete times, I’ve settled more or less on the emergence of the GPT-3 Beta in 2020. Of course OpenAI and other AI innovations predated this, but it was GPT-3 and its children ChatGPT and DALL-E 2 that really propelled discussions of AI and its possibilities and challenges into the mainstream.

    This also means that much of this moment is swept up with the COVID pandemic. While online life had bled into the real world in interesting ways pre-2020, it was really that year, during urban lockdowns, family Zooms, working from home, and a deeply felt global trauma, that online and off felt one and the same. AI innovators capitalised on the moment, seizing capital (financial and cultural) in order to promise a remote revolution built on AI and its now-shunned siblings in discourse, web3 and NFTs.

    How AI plugs into the web as a system is a further consideration — prior to this current boom, AI datasets in research were often closed. But OpenAI and its contemporaries used the internet itself as their dataset. All of humanity’s knowledge, writing, ideas, artistic output, fears, hopes, dreams, scraped and plugged into an algorithm, to then be analysed, searched, filtered, reworked at will by anyone.

    The downfall of FTX and the trial of Sam Bankman-Fried more or less marked the death knell of NFTs as the Next Big Thing, if not web3 as a broader notion to be deployed across open-source, federated applications. And as NFTs slowly left the tech conversation, as that hype cycle started falling, the AI boom filled the void, such that one can hardly log on to a tech news site or half of the most popular Substacks without seeing a diatribe or puff piece (not unlike this very blog post) about the latest development.

    ChatGPT has become a hit productivity tool, as well as a boon to students, authors, copywriters and content creators the world over. AI is a headache for many teachers and academics, many of whom fail to grasp not only its actual power and operations, but also how to usefully and constructively implement the technology in class activities and assessment. DALL-E, Midjourney and the like remain controversial phenomena in art and creative communities, where some hail them as invaluable aids, and others debate their ethics and value.

    As with all previous revolutions, the dust will eventually settle on AI. The research and innovation will continue as they always have, but out of the limelight and away from the headlines. It feels currently like we cannot keep up, that it’s all happening too fast, that if only we slowed down and thought about things, we could try and understand how we’ll be impacted, how everything might change. At the risk of historicising, exactly like I said I wouldn’t, people thought the same of the printing press, the aeroplane, and the computer. In the early 2000s, Andrew Murphie and John Potts were trying to capture the flux and flow and tension and release of culture and technology. They were grappling in particular with the widespread adoption of the internet, and how to bring that into line with other systems and theories of community and communication. Jean-François Lyotard had said that new communications networks functioned largely on “language games” between machines and humans. Building on this idea, Murphie and Potts suggested that the information economy “needs us to make unexpected ‘moves’ in these games or it will wind down through a kind of natural attrition. [The information economy] feeds on new patterns and in the process sets up a kind of freedom of movement within it in order to gain access to the new.”2

    The information economy has given way, now, to the platform economy. It might be easy, then, to think that the internet is dead and decaying or, at least, kind of withering or atrophying. Similarly, it can be even easier to think that in this locked-down, walled-off, platform- and app-based existence where online and offline are more or less congruent, we are without control. I’ve been dropping breadcrumbs over these last few posts as to how we might resist in some small way, if not to the detriment of the system, then at least to the benefit of our own mental states; and I hope to keep doing this in future posts (and over on Mastodon).

    For me, the above thoughts have been gestating for a long time, but they remain immature, unpolished, and unfiltered, which, in its own way, is a form of resistance to the popular image of the opaque black box of algorithmic systems. I am still trying to figure out what to do with them; whether to develop them further into a series of academic articles or a monograph, to just keep posting random bits and bobs here on this site, or to seed them into a creative piece, be it a film, book, or something else entirely. Maybe a little of everything, but I’m in no rush.

    As a postscript, I’m also publishing this here to resist another system, that of academic publishing, which is monolithic, glacial, frustrating, and usually hidden behind a paywall. Anyway, I’m not expecting anyone to read this, much less use or cite it in their work, but better it be here if someone needs it than reserved for a privileged few.

    As a bookend for the AI-generated image that opened the post, I asked Bard for “a cool sign-off for my blog posts about technology, history, and culture” and it offered the following, so here you go…

    Signing off before the robots take over. (Just kidding… maybe.)


    Notes

    1. For an excellent history of AI up to around 1990, I can’t recommend enough AI: The Tumultuous History of the Search for Artificial Intelligence by Daniel Crevier. Crevier has made the book available for download via ResearchGate. ↩︎
    2. Murphie, Andrew, and John Potts. 2003. Culture and Technology. London: Macmillan Education UK, p. 208. https://doi.org/10.1007/978-1-137-08938-0. ↩︎
  • Critics and creation

    Photo by Leah Newhouse on Pexels.

    I started reading this interview this morning, between Anne Helen Petersen and Betsy Gaines Quammen. I still haven’t finished it, despite being utterly fascinated, but even before I got to the guts of the interview, I was struck by a thought:

    In the algorithmised world, the creator is the critic.

    This thought isn’t happening in isolation; I’ve been thinking about ‘algorithmic culture’ for a couple of years, trying to order these thoughts into academic writing, or even creative writing. But this thought feels like a step in the right direction, even if I’ve no idea what the final output should or will be. Let’s scribble out some notes…

    If there’s someone whose work we enjoy, they’ll probably have an online presence — a blog or social media feed we can follow — where they’ll share what they like.

    It’s an organic kind of culture — but it’s one where the art and vocation of the critic continues to be minimised.

    This — and associated phenomena — is the subject of a whole bunch of recent and upcoming books (including this one, which is at the top of my to-read pile for the next month). It’s a kind of culture where the all-powerful algorithm becomes the sole arbiter of taste, but I also think there is pressure on creatives to be their own kind of critical and cultural hub.

    On the flip side, what we may traditionally have called critics — so modern-day social media commentators, influencers, your BookTubers or BookTokkers, your video essayists and their ilk — now also feel pressure to create. This pressure will come from their followers and acolytes, but also from random people who encounter them online, who will say something like “if you know so much why don’t you just do it yourself” etc etc…

    Some critics will leap at the opportunity and they absolutely should — we are hearing from diverse voices that wouldn’t otherwise have thought to try.

    But some should leave the creation to others — not because they’re not worth hearing from (they absolutely are), but because their value, their creativity, their strength, lies in how they shape language, images, and metaphor around the work of others. They don’t realise — as I didn’t for a long time — that being a critic is a vocation, a life’s work, a real skill. Look at any longer-form piece in the London Review of Books or The New Inquiry and it becomes very clear how valuable this work is.

    I’ve always loved the term critic, particularly cultural critic, or commentator, or essayist… they always seemed like wonderful archaic terms that don’t belong in the modern, fragmented, divided, confused world. But to call oneself a critic or essayist, to own that, and only that, is to defy the norms of culture; to refuse the ‘pillars’ of novel, film, press/journalism, and to stand to one side, giving much-needed perspective to how these archaic forms define, reflect, and challenge society.

  • Shift Lock #3: A sales pitch for the tepid take

    After ‘abandoning’ the blog part of this site in early 2022, I embarked on a foolish newsletter endeavour called Shift Lock. It was fun and/or sustainable for a handful of posts, but then life got in the way. Over the next little while I’ll re-post those ruminations here for posterity. Errors and omissions my own. This instalment was published May 5, 2022 (see all Shift Lock posts here).


    Photo by Pixabay on Pexels.com

    Twitter was already a corporate entity, and had been struggling with how to market and position itself anyway. Not to mention, its free speech woes — irrevocably tied to those of its competitors — are not surprising. If anything, Mr. Musk was something of a golden ticket: someone to hand everything over to.

    The influx/exodus cycle started before the news was official… Muskovites joined/returned to Twitter in droves, opponents found scrolls bearing ancient Mastodon tutorials and set up their own mini-networks (let’s leave that irony steaming in the corner for now).

    None of this is new: businesses are bought and sold all the time, the right to free speech is never unconditional (and nor should it be), and the general populace moves and shifts and migrates betwixt different services, platforms, apps, and spaces.

    What seems new, or at least different, about these latter media trends, issues, events, is the sheer volume of coverage they receive. What tends to happen with news from media industries (be they creative, social, or otherwise) is wall-to-wall coverage for a given week or two, before things peter out and we move on to the next block. It seems that online culture operates at two speeds: an instantaneous, rolling, roiling stream of chaos; and a broader, slightly slower rise and fall, where you can actually see trends come and go across a given time period. Taking the Oscars slap as an example: maybe that rise and fall lasts a week. Sometimes it might last two to four, as in the case of Musk and Twitter.

    How, then, do we consider or position these two speeds in broader ‘culture’?

    Like all of the aforementioned, Trump was not a new phenomenon. Populism was a tried and tested political strategy in 2015-16; just, admittedly, a strategy that many of us hoped had faded into obsolescence. However, true to the 20-30 year cycle of such things, Trump emerged. And while his wings were — mostly — clipped by the checks and balances of the over-complex American political system, the real legacy of his reign is our current post-truth moment. And that legacy is exemplified by a classic communications strategy: jamming. Jam the airwaves for a week, so everyone is talking about only one thing. Distract everyone from deeper issues that need work.

    This jamming doesn’t necessarily come from politicians, from strategists, from agencies, as it may once have done. Rather, it comes from a conversational consensus emerging from platforms — and this consensus is most likely algorithmically-driven. That’s the real concern. And as much as Musk may want to open up the doors and release the code, it’s really not that straightforward.

    The algorithms behind social media platforms are complex — more than that, they are nested, like a kind of digital Rube Goldberg machine. People working on one section of the code are often neither aware of, nor able to comprehend, what other teams might be working on, beyond any do-not-disturb-type directives from on high. As scholar Nick Seaver says in a recent Washington Post piece, “The people inside Twitter want to understand how their algorithm works, too.” (Albergotti 2022)

    Algorithms — at least those employed by companies like Twitter — are built to stoke the fires of engagement. And there ain’t no gasoline like reactions, like outrage, like whatever the ‘big thing’ is for that particular week. These wildfires also intersect with the broader culture in ways that it takes longer-form criticism (I would say academic scholarship, but we often miss the mark, or more accurately, due to glacial peer review turnarounds, the boat) to meaningfully engage and understand.
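
    If it helps to picture what “stoking the fires of engagement” looks like in code, here’s a minimal sketch. This is my own toy example, not any platform’s actual ranking logic: the signal names and weights are entirely invented for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        replies: int
        angry_reacts: int
        minutes_old: int

    def engagement_score(post: Post) -> float:
        """Toy heuristic: replies count double, outrage counts quadruple,
        and recency keeps the fire burning. All weights invented."""
        heat = post.likes + 2 * post.replies + 4 * post.angry_reacts
        freshness = 1 / (1 + post.minutes_old / 60)  # decays over hours
        return heat * freshness

    feed = [
        Post("measured long-read", likes=80, replies=5, angry_reacts=1, minutes_old=300),
        Post("inflammatory hot take", likes=40, replies=60, angry_reacts=90, minutes_old=30),
    ]

    # The hot take tops the feed despite having half the likes: outrage is gasoline.
    for post in sorted(feed, key=engagement_score, reverse=True):
        print(f"{engagement_score(post):7.1f}  {post.text}")
    ```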

    Thanks partly to COVID but also to general mental health stuff, I’ve been on a weird journey with social media (and news, to be fair) over the past 3-5 years. Occasional sabbaticals have certainly helped, but increasingly I’m just not checking it. This year I’ve found more and more writers and commentators whose long-form work I appreciate as a way of keeping across things, but also just for slightly more measured takes. Tepid takes. Not like a spa but more like a heated pool. This is partly why I started this newsletter-based journey, just to let myself think things through in a way that didn’t need to be posted immediately, but nor did I need to wait months/years for peer review. Somewhere beyond even the second trend-based speed I mentioned above.

    What it really lets me do, though, is disengage from the constant flow of algorithmically-driven media, opinion, reaction, and so on, in a way where I can still do that thinking in a relevant and appropriate way. What I’m hoping is that this kind of distance lets me turn around and observe that flow in new and interesting ways.


    Below the divider

    At the end of each post I link a few sites, posts, articles, videos that have piqued my interest of late. Some are connected to my research, some to teaching and other parts of academia, still others are… significantly less so (let’s keep some fun going, shall we?).


    Albergotti, Reed. ‘Elon Musk wants Twitter’s algorithm to be public. It’s not that simple.’ Washington Post, 16 April 2022.

  • Shift Lock #2: Numbers and nodes

    After ‘abandoning’ the blog part of this site in early 2022, I embarked on a foolish newsletter endeavour called Shift Lock. It was fun and/or sustainable for a handful of posts, but then life got in the way. Over the next little while I’ll re-post those ruminations here for posterity. Errors and omissions my own. This instalment was published April 1, 2022 (see all Shift Lock posts here).


    To take a uniquely Web 2.0 perspective, one might say that ‘there is no longer any such thing as a passive audience.’ It is undoubtedly true that new tools, technologies, and modes of communication have made it relatively straightforward to communicate one-to-one or among one’s networks. The result is a kind of town square both ad infinitum and ad nauseam, where memes and weekend warriors abound, a post-truth, “postpolitical cornucopia” where we all “fish, film, fuck, frolic, and fund from morning to midnight” (Miller 2009). In the social media age (Miller’s polite rage at user-generated content seems delightfully quaint now, in a ‘oooh, the teacher said fuck!’ kind of way), it can feel like we’re drowning in immediate reaction, and reactive opinion. In the immediate aftermath of the Will Smith slap incident at the 2022 Oscars, Ryan Broderick called it “viral pre-exhaustion”, the dread that the latest trending issue or moment will saturate feeds and streams and columns for days to come.

    I used to even watch award shows or televised live events hoping for this kind of thing to happen. But now, the very thought of having the same “have you seen X meme or Y take” conversation, which now happens both online and off, feels completely draining. (Broderick 2022)

    Saturation, and the feeling of existential dread linked to said saturation, is not a product of COVID, but the pandemi-moore certainly hasn’t helped. The distance between home and work, or study, or restaurants or, you know, outside, and the resultant necessary movement, meant that there were at least some forced breaks in the mindless absorption of hot takes. While we were stuck at home, that boundary, between brain and reactive opinion, between independent, critical thought and the feed, broke down as easily as the one between work and life.

    If global internet usage increased by a whopping 40% as a result of the pandemic (Sandvine Inc. 2020), then some of that at least has to be users who specifically joined some kind of social network to rage about X or Y pandemic trending topic. Or perhaps they were already raging, and the panini simply allowed them more time and justification and reasons to do so.

    It’s easy to look back and say times were simpler. Some have built careers out of it. And, sure, some of the diagrams we had when I first studied audiences were lovely.1

    Karl Bühler’s Organon Model of human communication, 1934.

    There has always, however, been a private and public sphere. It’s been a long time since I read my Habermas, but the notion of the latter sphere solidified around some kind of arena where debates could be had, grievances aired, authority ridiculed, and speech could be free. The concept, at least according to Habermas, emerged after the Renaissance, with the opening up of global trade passages and an increased interest in ideas, creativity, and independent thought.2 What fascinated me most as a rookie media scholar was that I was seeing these 40+ year old ideas playing out live in — get ready for a flashback — the blogosphere. This was the pre-social media height of public and independent discourse, where anyone could publish whatever they wanted to their LiveJournal, Blogger or WordPress, and the comments section was where the real conversation kicked off — believe it or not, they used to be rather civil.

    Habermas was also partly responsible for my hybrid interests of media and film, in part because he suggested that it was in media that many of these deliberations, debates, and grievances could be encoded. While I was reading this, I was of course blogging about films and TV shows, and chatting about them in my uni classes: my own little filter sphere, sure, but a neat micro-example of Habermas’ thinking.

    Over a decade later, and looking back over the evolution3 of internet technology and screen-based cultures, the public sphere seems at first glance to have evolved into a chaotic mess of bad takes and half-baked thinkpieces. The usual culprits cajole and dominate their target demographics, and the filter bubbles seem to close around everyone to an isolation-fuelled zenith. Social media is fragmenting into similar bubbles — e.g. monolithic Facebook/Twitter into Parler, Telegram, etc. — with little interest in public-facing discourse, and more in a kind of gated echo chamber where fringe ideas aren’t actively encouraged, but they certainly aren’t grounds for expulsion.

    The mechanics of Web 2.0 still exist as we shift to web3 or whatever comes next. It’s still very straightforward to set up some kind of public site for oneself and spout whatever nonsense you like (welcome to Shift Lock). But the unfortunate combination of the web of commerce/apps and the post-truth era means a siloing off: a splicing of the spheres.

    So where, what, who is ‘the audience’? Is it still possible to think of a ‘public’ as a homogenous entity in the era of the platform? Ida Willig tracks this shift within media agencies, and the move from scatter-shot TV and print campaigns to tracked and targeted exposures based on behaviours. As they write:

    When the media agency executive … speaks about ‘behaviour’, it is of course not our offline life he is referring to, nor is it any person in the sense of an identifiable human being, but the activity of a given IP address. This is a fundamental shift in how media agencies think about and work with consumers, and not least a fundamental shift in the knowledge that lies behind the construction of different target groups. (Willig 2022)

    Despite the best efforts of corporations over the last century to assure us that ‘we’re not a number’, it turns out we are after all. It makes things so much easier. In the past, salespeople would spin out an ad with no concrete idea of the number of exposures or conversions to sale it might generate. Willig uses the example of a car:

    With digital media, media agencies can sell ad space directed at people who are in the market for a car, or even a car of that specific brand, and track their exact online behaviour from interest to final buy. (Willig 2022)

    For academics, particularly of the humanities stripe like myself, this is tricky. We’ve done our best to shun spectatorship, and the figure of the singular ‘audience’ is pretty much totally poo-pooed now in cinema studies (that took some work). But even if we shift the conversation in textual analysis to potential interpretations, we’re still treating the audience as a known unknown, or worse still, simply hiding ourselves and our own interpretations.

    The subject of surveillance capitalism is treated as an individual with its own desires, needs, modes of engagement and routines. This sounds like progress until you remember that this system only cares about individuation so long as it makes you buy stuff.

    For media-makers, this is a problem, too — the majority are interested in getting as many people as possible to watch, read, listen to, play, or engage with their creation. Individuated niche segments and tiny custom campaigns direct a handful of IP addresses in predictable ways. In creating a perfect system for advertising, we have destroyed many of the concepts and spaces that could be viewed as a public sphere in the Habermasian sense. Perhaps there never was a monolithic mass media audience in this way, but it was helpful to have that in mind when thinking through how media works.

    Photo by Pixabay from Pexels: https://www.pexels.com/photo/close-up-photography-of-yellow-green-red-and-brown-plastic-cones-on-white-lined-surface-163064/

    So where does the public conversation play out? Instagram stories? TikTok? Whatever is trending on Twitter? Films and TV? Sure, in part. The public sphere is not just one thing, and that’s the point. It’s probably best to think of it in terms of the notion of media landscape discussed previously: a web or mesh of technologies, platforms, tools, companies and individuals, sending, receiving, storing. Add to that mesh several little silos or bubbles that have minimal connection to others, and some bubbles that encompass enormous sweeps of three-dimensional space. Conceivably, we can map The Conversation4 according to the number and frequency of connections between nodes in the mesh, drawing out themes and big issues accordingly.

    This is what algorithms are built to do: they map the mesh and find the best routes to take. What they carry along those routes might be commerce-driven or content-driven, but the goal is still to get it in front of a node (person, feed, platform, screen) who’ll use it. Algorithms are the new media agencies; the more things change, etc etc.
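
    If it helps to picture that mesh, here’s a small sketch of how one might map connections between nodes and trace the kind of routes an algorithm optimises for. Again, this is mine alone: the nodes and weights are invented, and it assumes the networkx graph library is available.

    ```python
    from collections import Counter

    import networkx as nx  # assumption: the networkx graph library is installed

    # A toy conversation mesh: nodes are people, feeds, platforms, screens;
    # edge weights stand in for how often two nodes interact.
    mesh = nx.Graph()
    mesh.add_weighted_edges_from([
        ("blogger", "platform_feed", 12),
        ("platform_feed", "reader_a", 30),
        ("platform_feed", "reader_b", 5),
        ("reader_a", "group_chat", 18),
        ("group_chat", "reader_b", 2),
    ])

    # Mapping the mesh: rank nodes by the number and frequency of their
    # connections, a crude proxy for where The Conversation concentrates.
    hubs = Counter({node: sum(data["weight"] for _, _, data in mesh.edges(node, data=True))
                    for node in mesh.nodes})
    print(hubs.most_common(3))  # themes and big issues cluster around these nodes

    # Finding the best routes: treat strong connections as cheap to traverse,
    # then ask how an item would travel from one node to another.
    route = nx.shortest_path(mesh, "blogger", "reader_b",
                             weight=lambda u, v, d: 1 / d["weight"])
    print(route)
    ```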


    Below the divider

    At the end of each post I’ll try to link a few sites, posts, articles, videos that have piqued my interest of late. Some will be connected to my research, some to teaching and other parts of academia, still others will be… significantly less so (let’s keep some fun going, shall we?).


    References

    Broderick, Ryan. ‘It’s just Oscars takes all the way down.’ Garbage Day, 29 March 2022.

    Miller, Toby. “Media Studies 3.0.” Television & New Media 10, no. 1 (2009): 5–6, 6.

    Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Great Britain: Polity Press, 1989, 17–18.

    Sandvine Inc., Global Internet Phenomena Report: COVID-19 Spotlight, May 2020, Waterloo, Canada, 5.

    Willig, Ida. “From Audiences to Data Points: The Role of Media Agencies in the Platformization of the News Media Industry.” Media, Culture & Society 44, no. 1 (January 2022): 56–71, 63-4.


    Notes

    1 Illustration from Lanigan, Richard L. 2013. ‘Information Theories’, in Paul Cobley and Peter J. Schulz (eds.), Theories and Models of Communication, Berlin: De Gruyter, pp. 59–83, p. 65.

    2 I knew I was lost to media theory/academia when I actually found his Structural Transformation (see Habermas 1989) interesting as a second-year.

    3 Yes, despite overwhelming evidence to the contrary, I still believe the internet is an evolution, thanks in part to Hank Green.

    4 As in The Conversation™ aka The Discourse, not to be confused with the academically-inflected publication of the same name.