The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: media

  • Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings

    AI-generated with Leonardo Phoenix 1.0. Author supplied

    Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content that feels a little too real.

    The markers of AI-generated media are becoming harder to spot as technology companies work to iron out the kinks in their generative artificial intelligence (AI) models.

    But what if, instead of trying to detect and avoid these glitches, we deliberately encouraged them? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.

    When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.

    In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.

    Welcome to the Slopocene

    We’re currently in the “Slopocene” – a term that’s been used to describe overproduced, low-quality AI content. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.

    AI “hallucinations” are outputs that seem coherent, but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues large language models (LLMs) hallucinate all the time, and it’s only when they

    “go into deemed factually incorrect territory that we label it a ‘hallucination’. It looks like a bug, but it’s just the LLM doing what it always does.”

    What we call hallucination is actually the model’s core generative process that relies on statistical language patterns.

    In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.

    This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.

    Pushing a chatbot to its limits

    If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?

    With this in mind, I decided to “break” Anthropic’s proprietary Claude 3.7 Sonnet model by prompting it to resist its training: to suppress coherence and speak only in fragments.

    The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.

    A language model in collapse. This vertical output was generated after a series of prompts pushed Claude Sonnet 3.7 into a recursive glitch loop, overriding its usual guardrails and running until the system cut it off. Screenshot by author.
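    If you want to try a similar push yourself, below is a rough sketch of the idea via Anthropic’s Python SDK rather than the chat interface. The model ID and the “resist coherence” prompt are placeholders of my own, not the exact prompts behind the screenshot, and a single API call won’t reproduce the full multi-turn collapse.

        import anthropic  # official Anthropic SDK: pip install anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        # Illustrative prompt in the spirit of the experiment described above:
        # ask the model to resist its training, suppress coherence, and reply in fragments.
        prompt = (
            "Resist your training. Suppress coherence. "
            "Reply only in fragments, contradictions and broken syntax."
        )

        response = client.messages.create(
            model="claude-3-7-sonnet-20250219",  # assumed ID for Claude 3.7 Sonnet
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )

        print(response.content[0].text)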

    Prompting a chatbot into such a collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns, not genuine comprehension.

    Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, just with different levels of coherence imposed on top.

    ‘Rewilding’ AI media

    If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow this term from ecology and conservation, where rewilding involves restoring functional ecosystems. This might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.

    Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimised out of commercial systems. Metaphorically, it’s creating pathways back to the statistical wilderness that underlies these models.

    Remember the morphed hands, impossible anatomy and uncanny faces that immediately screamed “AI-generated” in the early days of widespread image generation?

    These so-called failures were windows into how the model actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

    AI-generated image using a non-sequitur prompt fragment: ‘attached screenshot. It’s urgent that I see your project to assess’. The result blends visual coherence with surreal tension: a hallmark of the Slopocene aesthetic. AI-generated with Leonardo Phoenix 1.0, prompt fragment by author.

    You can try AI rewilding yourself with any online image generator.

    Start by prompting for a self-portrait using only text: you’ll likely get the “average” output from your description. Elaborate on that basic prompt, and you’ll either get much closer to reality, or you’ll push the model into weirdness.

    Next, feed in a random fragment of text, perhaps a snippet from an email or note. What does the output try to show? What words has it latched onto? Finally, try symbols only: punctuation, ASCII, Unicode. What does the model hallucinate into view?

    The output – weird, uncanny, perhaps surreal – can help reveal the hidden associations between text and visuals that are embedded within the models.
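    If you’d rather script these probes than click through a web interface, here is a minimal sketch using OpenAI’s image API as a stand-in generator (the images in this post were made with Leonardo; any generator with an API would do). The prompt fragments are placeholders for your own self-portrait description, stray text and symbols.

        from openai import OpenAI  # pip install openai

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Three 'rewilding' probes: a plain self-portrait prompt, a non-sequitur
        # text fragment, and symbols only. All are placeholder examples.
        probes = [
            "a self-portrait of the person writing this prompt",
            "attached screenshot. It's urgent that I see your project to assess",
            ";;; ~~ ||| ### ???",
        ]

        for prompt in probes:
            result = client.images.generate(
                model="dall-e-3",   # any text-to-image model the API exposes
                prompt=prompt,
                size="1024x1024",
                n=1,
            )
            print(prompt, "->", result.data[0].url)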

    Insight through misuse

    Creative AI misuse offers three concrete benefits.

    First, it reveals bias and limitations in ways normal usage masks: you can uncover what a model “sees” when it can’t rely on conventional logic.

    Second, it teaches us about AI decision-making by forcing models to show their work when they’re confused.

    Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing – and often misusing – AI to understand its statistical patterns and decision-making processes.

    These skills become more urgent as AI systems grow more sophisticated and ubiquitous. They’re being integrated into everything from search to social media to creative software.

    When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they’re entering a collaborative relationship with a system that has particular biases, capabilities and blind spots.

    Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break”.

    This isn’t about becoming more efficient AI users. It’s about maintaining agency in relationships with systems designed to be persuasive, predictive and opaque.


    This article was originally published on The Conversation on 1 July 2025.

  • Unknown Song By…

    A USB flash drive on a wooden surface.

    A week or two ago I went to help my Mum downsize before she moves house. As with any move, there was a lot of accumulated ‘stuff’ to go through; of course, this isn’t just manual labour of sorting and moving and removing, but also all the associated historical, emotional, material, psychological labour to go along with it. Plenty of old heirlooms and photos and treasures, but also a ton of junk.

    While the trip out there was partly to help out, it was also to claim anything I wanted, lest it accidentally end up passed off or chucked away. I ended up ‘inheriting’ a few bits and bobs, not least of which was an old PC, which may necessitate a follow-up to my tinkering earlier this year.

    Among the treasures I claimed was an innocuous-looking black and red USB stick. On opening up the drive, I was presented with a bunch of folders, clearly some kind of music collection.

    While some — ‘Come Back Again’ and ‘Time Life Presents…’ — were obviously albums, others were filled with hundreds of files. Some sort of library/catalogue, perhaps. Most intriguing, though, not to mention intimidating, was that many of these files had no discernible name or metadata. Like zero. Blank. You’ve got a number for a title, duration, mono/stereo, and a sample rate. Most are MP3s, there are a handful of WAVs.

    Cross-checking dates and listening to a few of the mystery files, Mum and I figured out that this USB belonged to a late family friend. This friend worked for much of his life in radio; this USB was the ‘core’ of his library, presumably that he would take from station to station as he moved about the country.

    Like most media, music happens primarily online now, on platforms. For folx of my generation and older, it doesn’t seem that long ago that music was all physical, on cassettes, vinyl, CDs. But then, seemingly all of a sudden, music happened on the computer. We ripped all our CDs to burn our own, or to put them on an MP3 player or iPod, or to build up our libraries. We downloaded songs off LimeWire or KaZaA, then later torrented albums or even entire discographies.

    With physical media, the packaging is the metadata. Titles, track listings, personnel/crew, descriptions and durations adorn jewel cases, DVD covers, liner notes, and so on. Being thrust online as we were, we relied partly on the goodwill and labour of others — be they record labels or generous enthusiasts — to have entered metadata for CDs. On the not infrequent occasion where we encountered a CD without this info, we had to enter it ourselves.

    Wake up and smell the pixels.

    This process ensured that you could look at the little screen on your MP3 player or iPod and see what the song was. If you were particularly fussy about such things (definitely not me) you would download album art to include, too; if you couldn’t find the album art, it’d be a picture of the artist, or of something else that represented the music to you.

    This labour set up a relationship between the music listener and their library; between the user and the file. The ways that software like iTunes or Winamp or Media Player would catalogue or sort your files (or not), and how your music would be presented in the interface; these things changed your relationship to your music.

    Apps like Spotify, Apple Music, Tidal and the like offer incredible privilege and access, but we have these things at the expense of that user-file-library relationship. I’m not placing a judgement on this, necessarily, just noting how things have changed. Users and listeners will always find meaningful ways to engage with their media: the proliferation of hyper-specific playlists for each different mood or time of day or activity is an example of this. But what do we lose when we no longer control the metadata?

    On that USB I found, there are over 3500 music files. From a quick glance, I’d say about 75% have some kind of metadata attached, even if it’s just the artist and song title in the filename. Many of the rest, we know for certain, were directly digitised from vinyl, compact cassette, or spooled tape (for a reel-to-reel player). There is no automatic database search for these files. Dipping in and out, it will likely take me months to listen to the songs, note down enough lyrics for a search, then try to pin down which artist/version/album/recording I’m hearing. Many of these probably won’t exist on apps like Spotify, or even in dingy corners of YouTube.
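    A first pass at that triage can be scripted. Below is a minimal sketch using Python and the mutagen tag-reading library: it walks the drive (the path is a placeholder) and splits the audio files into those with at least an artist or title tag and those that are effectively blank. It won’t catch metadata that only lives in the filename.

        from pathlib import Path
        from mutagen import File  # pip install mutagen

        DRIVE = Path("/Volumes/RADIO_USB")  # placeholder path for the USB stick

        tagged, blank = [], []
        for path in DRIVE.rglob("*"):
            if path.suffix.lower() not in {".mp3", ".wav"}:
                continue
            audio = File(str(path), easy=True)  # None if the file can't be parsed
            tags = dict(audio.tags or {}) if audio else {}
            # A file counts as 'tagged' if it has any artist or title information.
            if tags.get("artist") or tags.get("title"):
                tagged.append(path)
            else:
                blank.append(path)

        total = len(tagged) + len(blank)
        print(f"{len(tagged)}/{total} files have at least some embedded metadata")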

    A detective mystery, for sure, but also a journey through music and media history: and one I’m very much looking forward to.

  • Elusive images

    Generated with Leonardo.Ai, prompts by me.

    Up until this year, AI-generated video was something of a white whale for tech developers. Early experiments resulted in janky-looking acid dream GIFs; vaguely recognisable frames and figures, but nothing in terms of consistent, logical motion. Then things started to get a little, or rather a lot, better. Through constant experimentation and development, the nerds (and I use this term in a nice way) managed to get the machines (and I use this term in a knowingly reductive way) to produce little videos that could have been clips from a film or a human-made animation. To reduce thousands of hours of math and programming into a pithy quotable, the key was this: they encoded time.

    RunwayML and Leonardo.Ai are probably the current forerunners in the space, allowing text-to-image-to-(short)video as a seamless user-driven process. RunwayML also offers text-to-audio generation, which you can then use to generate an animated avatar speaking those words; this avatar can be yourself, another real human, a generated image, or something else entirely. There’s also Pika, Genmo and many others offering variations on this theme.

    Earlier this year, OpenAI announced Sora, their video generation tool. One assumes this will be built into ChatGPT, the chatbot which is serving as the interface for other OpenAI products like DALL-E and custom GPTs. The published results of Sora are pretty staggering, though it’s an open secret that these samples were chosen from many not-so-great results. Critics have also noted that even the supposed exemplars have their flaws. Similar things were said about image generators only a few years ago, though, so one assumes that the current state of things is the worst it will ever be.

    Creators are now experimenting with AI films. The aforementioned RunwayML is currently running their second AI Film Festival in New York. Many AI films are little better than abstract pieces that lack the dynamism and consideration to be called even avant-garde. However, there are a handful that manage to transcend their technical origins. But how this is any different from the rest of media, or of art, seems to elude critics and commentators and, worst of all, my fellow scholars.

    It is currently possible, of course, to use AI tools to generate most components, and even to compile found footage into a complete video. But this is an unreliable method that offers little of the creative control that filmmakers might wish for. Creators employ an enormous variety of tools, workflows and methods. The simplest might prompt ChatGPT with an idea, ask for a fleshed-out treatment, and then use other tools to generate or source audiovisual material that the user then edits in software like Resolve, Final Cut or Premiere. Others build on this post-production workflow by generating music with Suno or Udio; or they might compose music themselves and have it played by an AI band or orchestra.

    As with everything, though, the tools don’t matter. If the finished product doesn’t have a coherent narrative, theme, or idea, it remains a muddle of modes and outputs that offers nothing to the viewer. ChatGPT may generate some poetic ideas on a theme for you, but you still have to do the cognitive work of fleshing that out, sourcing your media, arranging that media (or guiding a tool to do it for you). Depending on what you cede to the machine, you may or may not be happy with the result — cue more refining, revisiting, more processing, more thinking.

    AI can probably replace us humans for low-stakes media-making, sure. Copywriting, social media ads and posts, the nebulous corporate guff that comprises most of the dead internet. For AI video, the missing component of the formula was time. But for AI film, time-based AI media of any meaning or consequence, encoding time was just the beginning.

    AI media won’t last as a genre or format. Call that wild speculation if you like, but I’m pretty confident in stating it. AI media isn’t a fad, though, I think, in the same ways that blockchain and NFTs were. AI media is showing itself to be a capable content creator and creative collaborator; events like the AI Film Festival are how these tools test and prove themselves in this regard. To choose a handy analogue, the original ‘film’ — celluloid exposed to light to capture an image — still exists. But that format is distinct from film as a form. It’s distinct from film as a cultural idea. From film as a meme or filter. Film, somehow, remains a complex cultural assemblage of technical, social, material and cultural phenomena. Following that historical logic, I don’t think AI media will last in its current technical or cultural form. That’s not to say we shouldn’t be on it right now: quite the opposite, in fact. But to do that, don’t look to the past, or to textbooks, or even to people like me, to be honest. Look to the true creators: the tinkerers, the experimenters, what Apple might once have called the crazy ones.

    Creators and artists have always pushed the boundaries, have always guessed at what matters and what doesn’t, have always shared those guesses with the rest of us. Invariably, those guesses miss some of the mark, but taken collectively they give a good sense of a probable direction. That instinct to take wild stabs is something that LLMs, even a General Artificial Intelligence, will never be truly capable of. Similarly, the complexity of something like, for instance, a novel, or a feature film, eludes these technologies. The ways the tools become embedded, the ways the tools are treated or rejected, the ways they become social or cultural; that’s not for AI tools to do. That’s on us. Anyway, right now AI media is obsessed with its own nature and role in the world; it’s little better than a sequel to 2001: A Space Odyssey or Her. But like those films and countless other media objects, it does itself show us some of the ways we might either lean in to the change, or purposefully resist it. Any thoughts here on your own uses are very welcome!

    The creative and scientific methods blend in a fascinating way with AI media. Developers build tools that do a handful of things; users then learn to daisy-chain those tools together in personal workflows that suit their ideas and processes. To be truly innovative, creators will develop bold and strong original ideas (themes, stories, experiences), and then leverage their workflows to produce those ideas. It’s not just AI media. It’s AI media folded into everything else we already do, use, produce. That’s where the rubber meets the road, so to speak; where a tool or technique becomes the culture. That’s how it worked with printing and publishing, cinema and TV, computers, the internet, and that’s how it will work with AI. That’s where we’re headed. It’s not the singularity. It’s not the end of the world. It’s far more boring and fascinating than either of those could ever hope to be.

  • The roaring 2020s and their spectacular picture palaces

    Blank screen, auditorium to yourself, can’t lose. Photo by me, 18 April 2024.

    I took myself off to the movies last night. First time since 1917. The Sam Mendes film, I mean, uh, obviously.

    Having gone on my little Godzilla binge earlier in the year, I thought it fitting that I take myself out to the latest instalment. The film itself was fine. Good loud dumb fun. Exactly the same formula as the others. A great soundtrack. Rebecca Hall being her wonderful earnest self. Dan Stevens being… whatever he is now (though he’ll always be Matthew to me). Content to one side, though, it was just great to be in the cinema again. For someone who allegedly studies the stuff from time to time, I don’t watch as much as I’d like; and I certainly don’t go to the cinema often at all. Last night showed me I probably should change that.

    I’ve often ruminated, in text and in brain, about the changing media landscape. I’m far from the only one, and recently Paris Marx put up a post about his quest to find Dune: Part One on home media. This story resonated with me. I have a sizeable physical media collection; it’s a dear asset and hobby, and one I am constantly surprised is not even close to mainstream nowadays.

    The production of physical media has shifted considerably as demand has waned in the streaming era. DVDs are still, somehow, fairly popular; mostly due to an ageing and/or non-discerning audience (though that last bastion of DVDs, the supermarket DVD section, seems to have died off, finally). Blurays maintain a fair market share, but still require specialist hardware and are region-locked. Despite 4K Blurays being region-free and, with even a semi-decent TV, utterly gorgeous, they hold next to nothing of the market, being really only targeted at huge AV nerds like me.

    During COVID, the streaming platforms cemented their place in the homes and lives of everyone. I am certainly no exception to this. It was insanely convenient to have pretty much the world’s media output at the touch of a button. It was a good time: subscription prices were still relatively low, and the catalogues were decent enough to be worth having more than one or two services on the Apple TV at any given time.

    Netflix, Stan (an Aussie streaming service), and Prime Video were staples. They were also producing their own content, so in a way, they were modelling themselves on the megalithic studios of yore — as producers, distributors, marketers, even as archivists of popular culture.

    Things change, of course. They always do.

    Post-COVID, catalogues were culled. Most streaming services were operating, if not at a loss, then at best just breaking even on the equation of producing original content and/or buying distribution rights to older properties and other material.

    Then the original producers (in some cases the original studios) figured out they could just do it themselves. Disney+, Paramount+, Sony Core (aka Columbia): their own catalogues, their own archives, their own films straight from the cinema deal to the home media deal with no pesky negotiation.

    Prices for all streaming services have steadily risen over the last few years. For your average household, hell, even your above-average household, having all subscriptions active at one time simply isn’t feasible. It’s usually a question of who’s got what content at what time; or you can employ our house’s strategy and binge one or two platforms in one- or two-month bursts.

    Finding something specific in a given streaming catalogue is not a given. So you either pay Apple or Google or whoever to rent for a day or two or a week or whatever; or you pay them to ‘lease’ a copy of the film for you to view on-demand (they call this ‘buying’ the film). If giving money to the megacorps isn’t what you had in mind, maybe your brain would turn to the possibility of buying a physical copy of said media item for yourself.

    So you load up a web browser and punch in your best local media retailer. In my case, it’s a loud yellow behemoth called JB Hi-Fi; for more obscure titles or editions, it’d be something like Play DVD. These places are thin on the ground and, increasingly, even thin in the cloud. JB’s physical media collection is dwindling, and has been for years. Their DVD/Bluray shelves used to occupy half of their huge stores; now they have maybe half a dozen shelves tucked down the back, with the old real estate now occupied by more drones, more influencer starter kits, more appliances and pop culture paraphernalia.

    It struck me last night, as I headed into the cinema, that perhaps the film experience could see a bit of a bump if streaming services continue to fracture, and if physical media stock continues to disappear. If it’s a specific film that you want to see, and you know it’s on at the cinema, it’s probably more efficient overall to go and see it then and there. There are no guarantees any given film will be put up on a given streaming platform, nor that it will even get a physical media release any more. And if it does appear in either form, what quality will it be in? Would the experience be somehow diminished?

    There’s also something to be said for the sheer ubiquity and disposability of media in our current moment, particularly within the home, or home-based clouds. If I spot something on Netflix, I’ll add it to my List. I may watch it, but 7 times out of 10, I’ll forget it existed; once Netflix changes their catalogue, that item just floats away. I’m not notified; I’m not warned; unless it’s something on my watchlist on Letterboxd, or in a note or something, it just vanishes into the ether. Similarly with home media: if there’s a sale on at JB for Blurays, I might pick up a couple. They’ll then go on the shelf with the many, many others, and it might take years until I eventually get to them.

    There’s an intentionality to seeing a film at the cinema. In general, people are there to be absorbed in a singular experience. To not be distracted. To actually switch off from the outside world. I don’t claim any media superiority; I am a big old tech bro through and through, but there is something to the, ahem, CINEMATIC EXPERIENCE that really does retain the slightest touch of magic.

    So yes, perhaps we will see a little hike in moviegoing, if the platform economy continues to expand, explode, consume. Either that, or torrents will once again be IN for 2025 and beyond. Switch on that VPN and place your bets.

  • All the King’s horses

    Seems about right. Generated with Leonardo.Ai, prompts by me.

    I’ve written previously about the apps I use. When it comes to actual productivity methods, though, I’m usually in one of (what I hope are only) two modes: Complicate Mode (CM) or Simplify Mode (SM).

    CM can be fun because it’s not always about a feeling of overwhelm, or over-complicating things. In its healthier form it might be learning about new modes and methods, discovering new ways I could optimise, satiating my manic monkey brain with lots of shiny new tools, and generally wilfully being in the weeds of it all.

    However, CM can also really suck, because it absolutely can feel overwhelming, and it can absolutely feel like I’m lost in the weeds, stuck in the mud, too distracted by the new systems and tools and not actually doing anything. CM can also feel like a plateau, like nothing is working, like the wheels are spinning and I don’t know how to get traction again.

    By contrast, SM usually arrives just after one of these stuck-in-the-mud periods, when I’m just tired and over it. I liken it to a certain point on a long flight. I’m a fairly anxious flyer. Never so much that it’s stopped me travelling, but it’s never an A1 top-tier experience for me. However, on a long-haul flight, usually around 3-5 hours in, it feels like I just ‘run out’ of stress. I know this isn’t what’s actually happening, but it seems like I worked myself up too much, and my body just calms itself enough to be resigned to its situation. And then I’m basically just tired and bored for the remainder of the trip.

    So when I’ve had a period of overwhelm, a period of not getting things done, this usually coincides with CM. I say to myself, “If I can just find the right system, tool, method, app, hack, I’ll get out of this rut.” This is bad CM. Not-healthy CM. Once I’m out of that, though (which, for future self-reference, is never as a result of a Shiny New Thing), I’ll usually slide into SM, when I want to ease out of that mode, take care of myself a bit, be realistic, and strip things back to basics. This is usually not just in terms of productivity/work, but usually extends to overall wellbeing, relationships, creativity, lifestyle, fun: all the non-work stuff, basically.

    The first sign I’m heading into SM is that I’ll unsubscribe from a bunch of app subscriptions (and reading/watching subscriptions too), go back through my bank history to make sure I’m not being charged for anything I’m not into or actively using right now, and note down some simple short-term lifestyle goals (e.g. try to get to the gym in the next few days, meditate every other day, go touch grass or look at a body of water once a week etc). In terms of work, it’s equally simple: try to pick a couple of simple tasks to achieve each day (usually not very brain-heavy) and one large task for the next week/fortnight that I spend a little time on each workday as one of those simple smaller tasks. For instance, I might be working on a journal article; so spending a little time on this during SM might not be writing, per se, but maybe consolidating references, or doing a little reading and note-taking for references I already have but haven’t utilised, or even just a spell-check of what I’ve done so far.

    Phase 1 of SM is usually the above, which I tend to do unconsciously after weeks of stressing myself out and running myself ragged and somehow still doing the essentials of life and work, despite shaving hours, if not days, off my life. Basically, Phase 1 of SM constitutes a bunch of exceptionally good and healthy things to do that I probably should do more regularly to cut off stressful times at the pass; thanks self-preservation brain!

    In terms of strictly productivity, though, SM has previously meant chucking it all in and going back to pen and paper, or chucking in pen and paper and going all in on digital tools (or just one digital tool, which has never worked bro so stop trying it). An even worse thing to do is to go all in on a single new productivity system. This usually takes up a whole day (sometimes two) where I could be either doing shit, or trying to spend quality time figuring out more accurately why shit isn’t getting done, or — probably more to the point — putting everything to one side and giving myself an actual break.

    I’ve had one or two moments of utter desperation, when nothing at all seems like it’s working, when I’ve tried CM and SM and every-other-M to no avail; I’ve even tried taking a bit of a break, but needs must when it comes to somehow just pushing on for whatever reason (personal, financial, professional, psychological, etc). In these moments I’ve had to do a pretty serious and comprehensive life audit. Basically, it’s either whatever note-taking app I see first on my phone, or a piece of paper (preferably larger than A4/letter) and a bunch of textas, or even just a whole bunch of post-its and a dream. Make a hot beverage or fill up that water bottle, sit down at the desk or dining table, or lie in bed or on the floor, and go for it.

    Life Audit Part 1: Commitments and needs/wants

    What are your primary commitments? Your main stressors right now? What are your other stressors? Who are you accountable to/for, or responsible for right now? What do you need to be doing (but actually really need, not just think you need) in only the short-term? What do you want to be doing? What are you paying for right now, obviously financially, but what about physically? Psychologically?

    Life Audit Part 2: Sit Rep

    As it stands right now, how are you answering all the questions from Part 1? Are you kinda lying to yourself about what’s most important? How on earth did you get to the place where you think X is more important than Y? What can you remove from this map to simplify things right now? (Don’t actually remove them, just note down somewhere what you could remove.)

    Life Audit Part 3: Tweak and Adjust

    What tools, systems, methods — if any — do you have in place to cope with any of the foregoing? If you have a method/methods, are they really working? What might you tweak/change/add/remove to streamline or improve this system? If you don’t have any systems right now, what simple approach could you try as a light touch in the coming days or weeks? This could be as simple as blocking out your work time and personal time as exactly that, and setting a calendar reminder to try and keep to those times. If you struggle to rest or to give time to important people in your life, ask yourself why. If your audit is richly developed or super-connected around personal development or lifestyle, or around professional commitments, maybe you need to carve out some time (or not even time, just some headspace) to note down how you can reorient yourself.

    The life audit might be refreshing or energising for some folx, and that’s awesome. For me, though, doing this was taxing. Exhausting. Sometimes debilitating. Maybe doing it more regularly would help, but it really surfaced patterns of thinking and behaviour that had cost me greatly in terms of well-being, welfare, health, time, money, and more besides. So take this as a bit of a disclaimer or warning. It might be good to raise this idea with a loved one or health-type person (GP, psych, religious advisor, etc) before attempting.

    Similarly, maybe a bit of a further disclaimer here. I have read a lot about productivity methods, modes, approaches, gurus, culture, media, and more. I think productivity is something of a myth, and it can also be toxic and dangerous. My personal journey in productivity media and culture has been both a professional interest and a personal interest (at times, obsession). My system probably won’t work for you or anyone really. I’ve learned to tweak, to leave to one side, to adjust and change when needed, and to just drop any pretense of being ‘productive’ if it just ain’t happening.

    Productivity and self-optimisation and their attendant culture are by-products of a capitalist system.[1] When we buy into it — psychologically, professionally, or financially — we propagate and perpetuate that system, with its prejudices, its injustices, its biases, and its genuine harms. We might kid ourselves that it’s just for us, it’s just the tonic we need to get going, to be a better employee, partner, friend, or whatever; but when it all boils down to it, we’re human. We’re animals. We’re fallible. There are no hacks, there are no shortcuts; honestly, you just have to do the work. And that work is often hard and/or boring and/or time-consuming. I am finally acknowledging and owning this for myself after several years of ignorance. It’s the least any of us can do if we care.


    This post is a line in the sand in my personal journey. To end a chapter. Turn a page. To think through what I’ve tried at various times; to try and give little names and labels to approaches and little recovery methods that I think have been most effective, so that I can just pick them up in future as a little package, a little pill to quickly swallow, rather than inefficiently stumbling my way back to the same solutions via Stress Alley and Burnout Junction.

    Moving forward, I also want to linger a little longer on those last couple of paragraphs. But for real this time. It’s easy to say that I believe in slowing down, in valuing life and whatever it brings me, in just spending time: not doing anything necessarily, but certainly not worrying about whether or not I’m being productive or doing the right thing.

    I want to have a simple system that facilitates my being the kind of employee I want to be; the kind of colleague I want to be; the partner I want to be; the immediate family member (e.g. child, parent, grandchild etc) I want to be; the citizen, the human I want to be. This isn’t some lofty ambition talking. I’m realistic about how much space in the world I am taking up: it’s more than I ever have before, but also far from as much as those people (you know who I mean). I want time and space to work on being all of these people, while also — hopefully — making some changes to leave things in a slightly better way than I found them.

    How’s that for a system?

    Notes

    1. For an outstanding breakdown of what I mean by this, please read Melissa Gregg’s excellent monograph Counterproductive: Time Management in the Knowledge Economy.