The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Category: Music

  • A Little Slop Music

    The AI experiment that turned my ick to 11 (now you can try it too!)

    When I sit at the piano I’m struck by a simple paradox: twelve repeating keys are both trivial and limitless. The layout is simple; mastery is not. A single key sets off a chain — lever, hammer, string, soundboard. The keyboard is the interface that controls an intricate deeper mechanism.

    The computer keyboard can be just as musical. You can sequence loops, dial patches, sample and resample, fold fragments into new textures, or plug an instrument in and hear it transformed a thousand ways. It’s a different kind of craft, but it’s still craft.

    Generative AI has given me more “magic” moments than any other technology I’ve tried: times when the interface fell away and something like intelligence answered my inputs. Images, text, sounds appearing that felt oddly new: the assemblage transcending its parts. Still, my critical brain knows it’s pattern-play: signal in noise.

    AI-generated music feels different, though.

    ‘Blåtimen’, by Lars Vintersholm & Triple L, from the album Just North of Midnight.

    In exploring AI, music, and ethics after the Velvet Sundown fallout, a colleague tasked students with building fictional bands: LLMs for lyrics and backstory, image and video generators for faces and promo, Suno for the music. Some students leaned into the paratexts; the musically inclined pulled stems apart and remixed them.

    Inspired, I tried it myself. And, wouldn’t you know, the experience produced a pile of Thoughts. And not insignificantly, a handful of Feelings.

    Lars Vintersholm, captured for a feature article in Scena Norge, 22 August 2025.

    Ritual-Technic: Conjuring a Fictional AI Band

    1. Start with the sound

    • Start with loose stylistic prompts: “lofi synth jazz beats,” “Scandi piano trio,” “psychedelic folk with sitar and strings,” or whatever genre-haunting vibe appeals.
    • Generate dozens (or hundreds) of tracks. Don’t worry if most are duds — part of the ritual is surfing the slop.
    • Keep a small handful that spark something: a riff, a texture, an atmosphere.

    2. Conjure the band

    • Imagine who could be behind this sound. A trio? A producer? A rotating collective?
    • Name them, sketch their backstories, even generate portraits if you like.
    • The band is a mask: it makes the output feel inhabited, not just spat out by a machine.

    3. Add the frame

    • Every band needs an album, EP, or concept. Pick a title that sets the mood (Just North of Midnight, Spectral Mixtape Vol. 1, Songs for an Abandoned Mall).
    • Create minimal visuals — a cover, a logo, a fake gig poster. The paratexts do heavy lifting in conjuring coherence.

    4. Curate the release

    • From the pile of generations, select a set that holds together. Think sequencing, flow, contrasts — enough to feel like an album, not a playlist.
    • Don’t be afraid to include misfires or weird divergences if they tell part of the story.

    5. Listen differently

    • Treat the result as both artefact and experiment. Notice where it feels joyous, uncanny, or empty.
    • Ask: what is my band teaching me about AI systems, creativity, and culture?

    Like many others, I’m sure, it took me a while to really appreciate jazz. For the longest time, to an ear tuned to consistent, unchanging monorhythms, clear structures, and simple chords and melodies, it just sounded like so much noise. It wasn’t until I became a little better at piano, and really not until I saw jazz played live and started following jazz musicians, composers, and theorists online, that I became fascinated by the endless inventiveness and ingenuity of these musicians and this music.

    This exploration, rightly, soon expanded into the origins, people, stories, and cultures of this music. This is a music born of pain, trauma, struggle, and injustice. It is a music whose pioneers, masters, apprentices, and advocates have been pilloried, targeted, attacked, and abused because of who they are and what they were trying to express. Scandinavian jazz, and European jazz in general, is its own special problematic beast: at best a form of cultural appropriation, at worst an offensive cultural colonialism.

    Here I was, then, conjuring music from my imaginary Scandi jazz band in Suno, in the full knowledge that even this experiment, this act of play, brushes up against both a fraught musical history, as well as ongoing debates and court cases on creativity, intellectual property, and generative systems.

    Play is how I probe the edges of these systems, how I test what they reveal about creativity, culture, and myself. But for the first time, the baseline ‘ickiness’ I feel around the ethics of AI systems became almost emotional, even physiological. I wasn’t just testing outputs, but testing myself: the churn of affect, the strangeness in my body, the sick-fascinated thrill of watching the machine spit out something that felt like an already-loaded form of music, again and again. Addictive, uncanny, grotesque.

    It’s addictive, in part, because it’s so fast. You put in a few words, generate or enter some lyrics, and within two minutes you have a functional piece of music that sounds 80 or 90% produced and ready to do whatever you want with. Each generation is wildly different if you want it to be. You might also generate a couple of tracks in a particular style, enable the cover version feature, and hear those same songs in a completely different tone, instrumentation, or genre. In the midst of generating songs, it felt like I was playing or using some kind of church organ-cum-Starship Enterprise-cum-dream materialiser… the true sensation of non-stop slop.

    What perhaps made it more interesting was the vague sense that I was generating something like an album, or something like a body of work within a particular genre and style. That meant that when I got a surprising result, I had to decide whether this divergence from that style was plausible for the spectral composer in my head.

    But behind this spectre-led exhilaration: the shadow of a growing unease.

    ‘Forever’, by Lars Vintersholm & Triple L (ft. Magnus LeClerq), from the album Just North of Midnight.

    AI-generated music used to survive only half-scrutiny: fine as background noise, easy to ignore. It still can — but with the right prompts and tweaks, the outputs are now more complex, even if not always more musical or artistic.

    If all you want is a quick MP3 for a short film or TikTok, they’re perfect. If you’re a musician pulling stems apart for remixing or glitch experiments, they’re interesting too — but the illusion falls apart when you expect clean, studio-ready stems. Instead of crisp, isolated instruments, you hear the model’s best guesses: blobs of sound approximating piano, bass, trumpet. Like overhearing a whole track, snipping out pieces that sound instrument-like, and asking someone else to reassemble them. The seams show. Sometimes the stems are tidy, but when they wobble and smear, you catch a glimpse of how the machine is stitching its music together.

    The album Just North of Midnight only exists because I decided to make something out of the bizarre and queasy experience of generating a pile of AI songs. It exists because I needed a persona — an artist, a creative driver, a visionary — to make the tension and the weirdness feel bearable or justified. The composer, the trio, the album art, the biographies: all these extra elements, whether as worldbuilding or texture, lend (and only lend) a sense of legitimacy and authenticity to what is really just an illusion of a coherent, composed artefact.

    For me, music is an encounter and an entanglement — of performer and instrument, artist and audience, instrument and space, audience and space, hard notes and soft feel. Film, by contrast (at least for me), is an assemblage — sound and vision cut and layered for an audience. AI images or LLM outputs feel assemblage-like too: data, models, prompts, outputs, contexts stitched together. AI music may be built on the same mechanics, but I experience it differently. That gap — between how it’s made and how it feels — is why AI music strikes me as strange, eerie, magical, uncanny.

    ‘Seasonal Blend’, by Lars Vintersholm & Triple L, from the album Just North of Midnight.

    So what’s at stake here? AI music unsettled me because it plays at entanglement without ever truly achieving it. It mimics encounter while stitching together approximations. And in that gap, I — perhaps properly for the first time — glimpsed the promise and danger of all AI-generated media: a future where culture collapses into an endless assemblage of banal, plausible visuals, sounds, and words. This is a future that becomes more and more likely unless we insist on the messy, embodied entanglements that make art matter: the contexts and struggles it emerges from, the people and stories it carries, the collective acts of making and appreciating that bind histories of pain, joy, resistance, and creativity.


    Listen to the album Just North of Midnight in its complete strangeness on SoundCloud.

  • Unknown Song By…

    A USB flash drive on a wooden surface.

    A week or two ago I went to help my Mum downsize before she moves house. As with any move, there was a lot of accumulated ‘stuff’ to go through; of course, this isn’t just the manual labour of sorting and moving and removing, but also all the associated historical, emotional, material, and psychological labour that goes along with it. Plenty of old heirlooms and photos and treasures, but also a ton of junk.

    While the trip out there was partly to help out, it was also to claim anything I wanted, lest it accidentally end up passed off or chucked away. I ended up ‘inheriting’ a few bits and bobs, not least of which was an old PC, which may necessitate a follow-up to my tinkering earlier this year.

    Among the treasures I claimed was an innocuous-looking black and red USB stick. On opening up the drive, I was presented with a bunch of folders, clearly some kind of music collection.

    While some — ‘Come Back Again’ and ‘Time Life Presents…’ — were obviously albums, others were filled with hundreds of files. Some sort of library/catalogue, perhaps. Most intriguing, though, not to mention intimidating, was that many of these files had no discernible name or metadata. Like zero. Blank. You’ve got a number for a title, duration, mono/stereo, and a sample rate. Most are MP3s; there are a handful of WAVs.

    Cross-checking dates and listening to a few of the mystery files, Mum and I figured out that this USB belonged to a late family friend. This friend worked for much of his life in radio; this USB was the ‘core’ of his library, presumably one he would take from station to station as he moved about the country.

    Like most media, music happens primarily online now, on platforms. For folx of my generation and older, it doesn’t seem that long ago that music was all physical, on cassettes, vinyl, CDs. But then, seemingly all of a sudden, music happened on the computer. We ripped all our CDs to burn our own, or to put them on an MP3 player or iPod, or to build up our libraries. We downloaded songs off LimeWire or KaZaA, then later torrented albums or even entire discographies.

    With physical media, the packaging is the metadata. Titles, track listings, personnel/crew, descriptions and durations adorn jewel cases, DVD covers, liner notes, and so on. Being thrust online as we were, we relied partly on the goodwill and labour of others — be they record labels or generous enthusiasts — to have entered metadata for CDs. On the not infrequent occasion where we encountered a CD without this info, we had to enter it ourselves.

    Wake up and smell the pixels. (source)

    This process ensured that you could look at the little screen on your MP3 player or iPod and see what the song was. If you were particularly fussy about such things (definitely not me) you would download album art to include, too; if you couldn’t find the album art, it’d be a picture of the artist, or of something else that represented the music to you.

    This labour set up a relationship between the music listener and their library; between the user and the file. The ways that software like iTunes or Winamp or Media Player would catalogue or sort your files (or not), and how your music would be presented in the interface: these things changed your relationship to your music.

    Apps like Spotify, Apple Music, Tidal, and the like offer incredible privilege and access, but they come at the expense of this user-file-library relationship. I’m not placing a judgement on this, necessarily, just noting how things have changed. Users and listeners will always find meaningful ways to engage with their media: the proliferation of hyper-specific playlists for each different mood or time of day or activity is an example of this. But what do we lose when we no longer control the metadata?

    On that USB I found, there are over 3500 music files. From a quick glance, I’d say about 75% have some kind of metadata attached, even if it’s just the artist and song title in the filename. Many of the rest, we know for certain, were directly digitised from vinyl, compact cassette, or spooled tape (for a reel-to-reel player). There is no automatic database search for these files. It will likely take me months of dipping in and out to listen to the songs, note down enough lyrics for a search, then try to pin down which artist/version/album/recording I’m hearing. Many of these probably won’t exist on apps like Spotify, or even in dingy corners of YouTube.
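
    That first pass, at least, needn’t be entirely by hand. Here’s a rough sketch (not my actual workflow; the mount path and the ‘Artist - Title’ filename convention are assumptions) of how the quick glance might be automated with Python and the mutagen library: separate the files that carry some metadata, whether embedded tags or a descriptive filename, from the true blanks that need the detective treatment.

    ```python
    # A rough sketch: sort a drive's MP3/WAV files into 'has some metadata' vs 'blank'.
    # Assumes the mutagen library (pip install mutagen) and a hypothetical mount path.
    from pathlib import Path

    from mutagen import File as MutagenFile

    MUSIC_ROOT = Path("/media/usb")  # hypothetical mount point for the drive

    tagged, blank = [], []

    for path in sorted(MUSIC_ROOT.rglob("*")):
        if path.suffix.lower() not in {".mp3", ".wav"}:
            continue
        audio = MutagenFile(str(path), easy=True)  # easy=True exposes plain 'artist'/'title' keys where supported
        tags = audio.tags if audio is not None else None
        has_tags = bool(tags) and any(
            key in tags for key in ("artist", "title", "TPE1", "TIT2")
        )
        looks_named = " - " in path.stem  # e.g. "Artist - Song Title.mp3"
        (tagged if has_tags or looks_named else blank).append(path.name)

    print(f"{len(tagged)} files with some metadata, {len(blank)} true mysteries")
    for name in blank[:20]:  # the first few candidates for the detective work
        print("  ?", name)
    ```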

    A detective mystery, for sure, but also a journey through music and media history: and one I’m very much looking forward to.