The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: technology

  • Unknown Song By…

    A USB flash drive on a wooden surface.

    A week or two ago I went to help my Mum downsize before she moves house. As with any move, there was a lot of accumulated ‘stuff’ to go through; of course, this isn’t just manual labour of sorting and moving and removing, but also all the associated historical, emotional, material, psychological labour to go along with it. Plenty of old heirlooms and photos and treasures, but also a ton of junk.

While the trip out there was partly to help out, it was also to claim anything I wanted, lest it accidentally end up passed off or chucked away. I ended up ‘inheriting’ a few bits and bobs, not least of which was an old PC, which may necessitate a follow-up to my tinkering earlier this year.

    Among the treasures I claimed was an innocuous-looking black and red USB stick. On opening up the drive, I was presented with a bunch of folders, clearly some kind of music collection.

While some — ‘Come Back Again’ and ‘Time Life Presents…’ — were obviously albums, others were filled with hundreds of files. Some sort of library/catalogue, perhaps. Most intriguing, though, not to mention intimidating, was that many of these files had no discernible name or metadata. Like zero. Blank. You’ve got a number for a title, duration, mono/stereo, and a sample rate. Most are MP3s; there are a handful of WAVs.

Cross-checking dates and listening to a few of the mystery files, Mum and I figured out that this USB belonged to a late family friend. This friend worked for much of his life in radio; this USB was the ‘core’ of his library, which he presumably took from station to station as he moved about the country.

    Like most media, music happens primarily online now, on platforms. For folx of my generation and older, it doesn’t seem that long ago that music was all physical, on cassettes, vinyl, CDs. But then, seemingly all of a sudden, music happened on the computer. We ripped all our CDs to burn our own, or to put them on an MP3 player or iPod, or to build up our libraries. We downloaded songs off LimeWire or KaZaA, then later torrented albums or even entire discographies.

    With physical media, the packaging is the metadata. Titles, track listings, personnel/crew, descriptions and durations adorn jewel cases, DVD covers, liner notes, and so on. Being thrust online as we were, we relied partly on the goodwill and labour of others — be they record labels or generous enthusiasts — to have entered metadata for CDs. On the not infrequent occasion where we encountered a CD without this info, we had to enter it ourselves.

    Wake up and smell the pixels. (source)

    This process ensured that you could look at the little screen on your MP3 player or iPod and see what the song was. If you were particularly fussy about such things (definitely not me) you would download album art to include, too; if you couldn’t find the album art, it’d be a picture of the artist, or of something else that represented the music to you.

This labour set up a relationship between the music listener and their library; between the user and the file. The ways that software like iTunes or Winamp or Media Player would catalogue or sort your files (or not), and how your music would be presented in the interface: these things changed your relationship to your music.

Despite the incredible privilege and access that apps like Spotify, Apple Music, Tidal, and the like offer, that access comes at the expense of the user-file-library relationship. I’m not placing a judgement on this, necessarily, just noting how things have changed. Users and listeners will always find meaningful ways to engage with their media: the proliferation of hyper-specific playlists for each different mood or time of day or activity is an example of this. But what do we lose when we no longer control the metadata?

    On that USB I found, there are over 3500 music files. From a quick glance, I’d say about 75% have some kind of metadata attached, even if it’s just the artist and song title in the filename. Many of the rest, we know for certain, were directly digitised from vinyl, compact cassette, or spooled tape (for a reel-to-reel player). There is no automatic database search for these files. Dipping in and out, it will likely take me months to listen to the songs, note down enough lyrics for a search, then try to pin down which artist/version/album/recording I’m hearing. Many of these probably won’t exist on apps like Spotify, or even in dingy corners of YouTube.
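For fellow tinkerers: files like these can at least be triaged programmatically before the real detective work starts. Below is a rough Python sketch (the function names are mine, not from any particular tool) that parses the legacy ID3v1 tag block, which lives in the last 128 bytes of an MP3, and flags files with nothing useful in them. In practice a library like mutagen handles the far more common ID3v2 tags, but the old format is simple enough to read by hand:

```python
def read_id3v1(data):
    """Parse an ID3v1 tag from the raw bytes of an MP3 file.

    The tag, if present, is the final 128 bytes: a 'TAG' marker followed by
    fixed-width fields. Returns a dict of fields, or None if there's no tag.
    """
    if len(data) < 128:
        return None
    tag = data[-128:]
    if tag[:3] != b"TAG":
        return None

    def field(raw):
        # Fields are fixed-width, padded with NULs or spaces.
        return raw.split(b"\x00")[0].decode("latin-1", "replace").strip()

    return {
        "title": field(tag[3:33]),
        "artist": field(tag[33:63]),
        "album": field(tag[63:93]),
        "year": field(tag[93:97]),
    }


def has_metadata(path):
    """True if the file carries an ID3v1 tag with at least a title or artist."""
    with open(path, "rb") as f:
        data = f.read()
    info = read_id3v1(data)
    return bool(info and (info["title"] or info["artist"]))
```

On a file with no tag at all, `has_metadata` simply returns False, which is exactly the triage signal needed to sort the mystery files from the labelled ones.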

    A detective mystery, for sure, but also a journey through music and media history: and one I’m very much looking forward to.

  • Elusive images

    Generated with Leonardo.Ai, prompts by me.

    Up until this year, AI-generated video was something of a white whale for tech developers. Early experiments resulted in janky-looking acid dream GIFs; vaguely recognisable frames and figures, but nothing in terms of consistent, logical motion. Then things started to get a little, or rather a lot, better. Through constant experimentation and development, the nerds (and I use this term in a nice way) managed to get the machines (and I use this term in a knowingly reductive way) to produce little videos that could have been clips from a film or a human-made animation. To reduce thousands of hours of math and programming into a pithy quotable, the key was this: they encoded time.

    RunwayML and Leonardo.Ai are probably the current forerunners in the space, allowing text-to-image-to-(short)video as a seamless user-driven process. RunwayML also offers text-to-audio generation, which you can then use to generate an animated avatar speaking those words; this avatar can be yourself, another real human, a generated image, or something else entirely. There’s also Pika, Genmo and many others offering variations on this theme.

    Earlier this year, OpenAI announced Sora, their video generation tool. One assumes this will be built into ChatGPT, the chatbot which is serving as the interface for other OpenAI products like DALL-E and custom GPTs. The published results of Sora are pretty staggering, though it’s an open secret that these samples were chosen from many not-so-great results. Critics have also noted that even the supposed exemplars have their flaws. Similar things were said about image generators only a few years ago, though, so one assumes that the current state of things is the worst it will ever be.

Creators are now experimenting with AI films. The aforementioned RunwayML is currently running their second AI Film Festival in New York. Many AI films are little better than abstract pieces that lack the dynamism and consideration to be called even avant-garde. However, a handful manage to transcend their technical origins. That the same could be said of all media, all art, somehow eludes critics, commentators, and, worst of all, my fellow scholars.

It is currently possible, of course, to use AI tools to generate most components, and even to compile found footage into a complete video. But this is an unreliable method that offers little of the creative control that filmmakers might wish for. Creators employ an endless variety of tools, workflows, and methods. The simplest might prompt ChatGPT with an idea, ask for a fleshed-out treatment, and then use other tools to generate or source audiovisual material that the user then edits in software like Resolve, Final Cut or Premiere. Others build on this post-production workflow by generating music with Suno or Udio; or they might compose music themselves and have it played by an AI band or orchestra.

    As with everything, though, the tools don’t matter. If the finished product doesn’t have a coherent narrative, theme, or idea, it remains a muddle of modes and outputs that offers nothing to the viewer. ChatGPT may generate some poetic ideas on a theme for you, but you still have to do the cognitive work of fleshing that out, sourcing your media, arranging that media (or guiding a tool to do it for you). Depending on what you cede to the machine, you may or may not be happy with the result — cue more refining, revisiting, more processing, more thinking.

    AI can probably replace us humans for low-stakes media-making, sure. Copywriting, social media ads and posts, the nebulous corporate guff that comprises most of the dead internet. For AI video, the missing component of the formula was time. But for AI film, time-based AI media of any meaning or consequence, encoding time was just the beginning.

AI media won’t last as a genre or format. Call that wild speculation if you like, but I’m pretty confident in stating it. AI media isn’t a fad, though, I think, in the same ways that blockchain and NFTs were. AI tools are showing themselves to be capable content creators and creative collaborators; events like the AI Film Festival are how these tools test and prove themselves in this regard. To choose a handy analogue, the original ‘film’ — celluloid exposed to light to capture an image — still exists. But that format is distinct from film as a form. It’s distinct from film as a cultural idea. From film as a meme or filter. Film, somehow, remains a complex cultural assemblage of technical, social, material and cultural phenomena. Following that historical logic, I don’t think AI media will last in its current technical or cultural form. That’s not to say we shouldn’t be on it right now: quite the opposite, in fact. But to do that, don’t look to the past, or to textbooks, or even to people like me, to be honest. Look to the true creators: the tinkerers, the experimenters, what Apple might once have called the crazy ones.

Creators and artists have always pushed the boundaries, have always guessed at what matters and what doesn’t, have always shared those guesses with the rest of us. Invariably, those guesses miss some of the mark, but taken collectively they give a good sense of a probable direction. That instinct to take wild stabs is something that LLMs, even an artificial general intelligence, will never be truly capable of. Similarly, the complexity of something like, for instance, a novel, or a feature film, eludes these technologies. The ways the tools become embedded, the ways the tools are treated or rejected, the ways they become social or cultural; that’s not for AI tools to do. That’s on us. Anyway, right now AI media is obsessed with its own nature and role in the world; it’s little better than a sequel to 2001: A Space Odyssey or Her. But like those films and countless other media objects, it does itself show us some of the ways we might either lean in to the change, or purposefully resist it. Any thoughts here on your own uses are very welcome!

The creative and scientific methods blend in a fascinating way with AI media. Developers build tools that do a handful of things; users then learn to daisy-chain those tools together in personal workflows that suit their ideas and processes. To be truly innovative, creators will develop bold and strong original ideas (themes, stories, experiences), and then leverage their workflows to produce those ideas. It’s not just AI media. It’s AI media folded into everything else we already do, use, produce. That’s where the rubber meets the road, so to speak; where a tool or technique becomes the culture. That’s how it worked with printing and publishing, cinema and TV, computers, the internet, and that’s how it will work with AI. That’s where we’re headed. It’s not the singularity. It’s not the end of the world. It’s far more boring and fascinating than either of those could ever hope to be.

  • Blinded by machine visions

    A grainy, indistinct black and white image of a human figure wearing a suit and tie. The bright photo grain covers his eyes like a blindfold.
    Generated with Adobe Firefly, prompts by me.

    I threw around a quick response to this article on the socials this morning and, in particular, some of the reactions I was seeing. Here’s the money quote from photographer Annie Leibovitz, when asked about the effects of AI tools, generative AI technology, etc, on photography:

“That doesn’t worry me at all,” she told AFP. “With each technological progress, there are hesitations and concerns. You just have to take the plunge and learn how to use it.”[1]

    The paraphrased quotes continue on the following lines:

    She says AI-generated images are no less authentic than photography.

“Photography itself is not really real… I like to use Photoshop. I use all the tools available.”

Even deciding how to frame a shot implies “editing and control on some level,” she added.[2]

    A great many folx were posting responses akin to ‘Annie doesn’t count because she’s in the 1%’ or ‘she doesn’t count because she’s successful’, ‘she doesn’t have to worry anymore’ etc etc.

    On the one hand it’s typical reactionary stuff with which the socials are often ablaze. On the other hand, it’s fair to fear the impact of a given innovation on your livelihood or your passion.

As I hint in my own posts[3], though, I think the temptation to leap on this as privilege is premature, and a little symptomatic of whatever The Culture and/or The Discourse is at the moment, and has been for the duration of the platformed web, if not much longer.

    Leibovitz is and has always been a jobbing artist. Sure, in later years she has been able to pick and choose a little more, but by all accounts she is a busy and determined professional, treating every job with just as much time, effort, dedication as she always has. The work, for Leibovitz, has value, just as much — if not more — than the product or the paycheck.

    I don’t mean to suddenly act my age, or appear much older and grumpier than I am, but I do wonder about how much time aspiring or current photographers spend online discussing and/or worrying and/or reacting to the latest update or the current fad-of-the-moment. I 100% understand the need for today’s artists and creators to engage in some way with the social web, if only to put their names out there to try and secure work. But if you’re living in the comments, whipping yourselves and others into a frenzy about AI or whatever it is, is that really the best use of your time?

    The irony of me asking such questions on a blog where I do nothing but post and react is not lost on me, but this blog for me is a scratchpad, a testing ground, a commonplace book; it’s a core part of my ‘process’, whatever that is, and whatever it’s for. This is practice for other writing, for future writing, for my identity, career, creative endeavours as a writer. It’s a safe space; I’m not getting angry (necessarily), or seeking out things to be angry about.

    But I digress. Leibovitz is not scared of AI. And as someone currently working in this space, I can’t disagree. Having even a rudimentary understanding of what these tools are actually doing will dispel some of the fear.

    Further, photography, like the cinema that it birthed, has already died a thousand deaths, and will die a thousand more.

Brilliant[4] photography lecturer and scholar Alison Bennett speaks to the legacy and persistence of photographic practice here:

    “Recent examples [of pivotal moments of change in photography] include the transition from analogue film to digital media in the late 20th century, then the introduction of the internet-connected smart phone from 2007,” they said.

    “These changes fundamentally redefined what was possible and how photography was used.

“The AI tipping point is just another example of how photography is constantly being redefined.”[5]

    As ever, the tools are not the problem. The real enemies are the companies and people that are driving the tools into the mainstream at scale. The companies that train their models on unlicensed datasets, drawn from copyrighted material. The people that buy into their own bullshit about AI and AGI being some kind of evolutionary and/or quasi-biblical moment.

    For every post shitting on Annie Leibovitz, you must have at least twenty posts actively shitting on OpenAI and their ilk, pushing for ethically-sourced and maintained datasets, pushing for systemic change to the resource management of AI systems, including sustainable data centers.

    The larger conceptual questions are around authenticity and around hard work. If you use AI tools, are you still an authentic artist? Aren’t AI tools just a shortcut? Of course, the answers are ‘not necessarily’. If you’ve still done the hard yards to learn about your craft, to learn about how you work, to discover what kinds of stories and experiences you want to create, to find your voice, in whatever form it takes, then generative AI is a paintbrush. A weird-looking paintbrush, but a paintbrush nevertheless (or plasticine, or canvas, or glitter, or an app, etc. etc. ad infinitum).

    Do the work, and you too can be either as ambivalent as Leibovitz, or as surprised and delighted as you want to be. Either way, you’re still in control.

Notes

1. Agence France-Presse 2024, ‘Photographer Annie Leibovitz: “AI doesn’t worry me at all”’, France 24, viewed 26 March 2024, <https://www.france24.com/en/live-news/20240320-photographer-annie-leibovitz-ai-doesn-t-worry-me-at-all>.
2. ibid.
3. See here, and with tiny edits for platform affordances here and here. What’s the opposite of POSSE? PEPOS?
4. I am somewhat biased as, at the time of writing, Dr. Bennett and I currently share a place of work. To look through their expanded (heh) works, go here.
5. Odell, T 2024, ‘New exhibition explores AI’s influence on the future of photography’, RMIT University, viewed 26 March 2024, <https://www.rmit.edu.au/news/all-news/2024/mar/photo-2024>.
  • Operation Tech Revival, Part 3

    Read Part 1 here, and Part 2 here.

    Obligatory artfully-cropped stock photo of a completely different Macbook model to the one discussed in this post. Photo by Math on Pexels.com.

    Part 3: Give me my MacBook back, Mac.

    2012 was a big year. The motherland had the Olympics and Liz’s Diamond Jubilee; elsewhere, the Costa Concordia ran aground; Curiosity also made landfall, but intentionally, on Mars; and online it was nothing but Konys, Gangnam Styles and Overly Attached Girlfriends as far as the eye could see.

For me, I was well into my PhD, around the halfway mark; I’d also scaled back full-time media production work for that reason, and was picking up the odd shift at Video Ezy again. It was also the year that I upgraded to a late 2011 MacBook Pro. I think I had had one MacBook before then, possibly purchased in 2007–8; prior to that I had a Windows machine, which was nicked from my inner west apartment around 2009, along with a lovely Sony Alpha camera (vale).

    I can’t believe this image persists on Flickr. Here’s the same machine, with its nice black suit on, in situ during the completion of said PhD!

    The 2011 MacBook served me well until early 2015, when I was given the first work machine, which I’m fairly sure was a late 2014 MBP. I tried to revive the 2011 machine once before, when my partner needed a laptop for study; however, when in early 2020 it took approximately 5 minutes to load a two-page PDF, we thought maybe it was time to put it away. For some reason though, I just held onto it, and it sat idle in the cupboard, until a week or two ago, when I caught myself thinking: what if…?

So having more or less sorted the Raspberry Pi, I turned my attention to this absolute chunkster of a laptop. It’s amazing how the sizes and shapes of tech come in and out of vogue. The 2011 MBP is obviously heavier than the work laptop, but not by as much as you’d think (2.04 kg vs. 1.6 kg for my 2020 M1 machine), with roughly the same screen size. Obviously, though, the older model has much thicker housing (H 2.4 × W 32.5 × D 22.7 cm vs. H 1.56 × W 30.41 × D 21.24 cm). Anyway, some swift searching about (by myself but mainly by my best mate, who also has a huge interest in older tech, both hardware and software) led to iFixIt, where a surprisingly small amount of money resulted in an all-in-one 500GB SSD upgrade kit arriving within a few days.

    I aspire to the perfect techbro desktop-fu. How did I do?

I had some time to kill late last week, so I set about changing the hard drives. It was also the perfect opportunity to brush away many years of accumulated dust, and a can of compressed air took care of the trickier areas. With the help of tutorials and such, all of this took under half an hour. What filled the rest of the allotted time was sorting out boot disks for OS X. Internet Recovery was a no-go at first, and after several failed attempts at downloading the appropriately agèd version, I tried once again. No good. Cue forum and Reddit diving for an hour or two, before finally obtaining what seemed to be the correct edition of High Sierra, without several probably-very-necessary security patches and so on.

    Anyway, I managed to boot up High Sierra off an ancient USB, got it installed on the SSD, and then very quickly realised that while the SSD certainly afforded greater speed than before, High Sierra was virtually unusable apart from the already installed apps and a browser. I knew I could probably try to upgrade to Mojave or maybe even Big Sur, but even with the SSD, I wasn’t sure how well it would run; and it was still tough to find usable images for those versions of macOS. But somewhere in my Reddit and forum explorations I’d seen that some had succeeded in installing Linux on their older machines, and that it had run as well and/or even better than whatever the latest macOS was that they could use.

    Two laptops, both alike in backlit keyboard, on fair floor where we lay our scene.

Thanks to the Pi, I had a little familiarity with very basic Linux OS’s (aka DISTROS, yeah children I can use the LINGO I am heaps 1337); it was down to whether the MBP could run Ubuntu, or whether Mint or Elementary would be more efficient. In the end, I went with Mint, and so far so good? It’s a little laggy, particularly if multiple apps are open; I’m drafting this in Obsidian and the response isn’t great. I would also note that the system’s fan is on, and loud, most of the time, even with mbpfan running. The resolution on my 4K monitor is worse than the Pi, of course, but this is due to the lack of direct HDMI output from the MBP; I’m using a Thunderbolt to HDMI adapter. That said, maybe I just have to tweak some settings.

    A glimpse behind the curtain.

In the meantime, it’s been fun to play in a new OS; Mint feels very Windows-esque, though with some features that felt very intuitive to a longer-term Mac user. Being restricted to maybe a maximum of five apps running simultaneously means I have to be conscious of what I’m doing: this actually helps me plan my workspace and my worktime more carefully. I’m using this as a personal machine, so mostly for creative writing and blogging; in general, it affords more than enough power to do a little research, take notes, draft work. If there’s anything more complex, I’ll probably have to shift to the work machine, though I did clock Shotcut and GIMP being available for basic video/image work, and obviously there’s Audacity and similar for audio.

Physically, the MBP sits flat on my desktop in front of the monitor. Eventually I will probably get a monitor arm, so it can slide back a little further. Swapping it out for my work machine isn’t too difficult; I just have to plug the HDMI into a USB-C dongle that permanently has a primary external drive, webcam and mic hooked up to it. Now that I think of it, my monitor probably has more than one HDMI input, so potentially I could just add a second HDMI cable to that arrangement and save a step. Something to try once this is posted! I’m still in a bit of cable hell, as well, due to just wanting the simplicity of plugging in a USB keyboard and mouse to the old MacBook; over the next week or two I’ll try to configure the Bluetooth accessories for a bit more desktop breathing room.

    Behold the crisp image quality of the iPhone 8 (an old-tech story for another time…).

    Apart from these little tweaks, the only ‘major’ thing I want to tweak short-term is the Linux distro; it just feels like Mint Cinnamon may be pushing the system a little too hard. Mint does offer two lighter variants, MATE and Xfce, though I also did download Elementary and Ubuntu MATE. Mint MATE for the MBP, I reckon, and then maybe even Ubuntu MATE on the Pi. To be fair, though, most of the time the machine is struggling, I have Chrome open, so I could also just try a lighter browser, like one of your Chromiums or your Midoris.

Looking back over this drafted post, it reads like I know way more about this than I actually do. Like I’m just flashing drives and rebooting systems and slinging OSes and SSDs like it’s nobody’s business. To be clear: I absolutely don’t. Most of the time it was either my aforementioned best mate who knew much more about all of this stuff than I ever did, or other tech-savvy friends or colleagues; my machines have always been repaired, maintained, serviced by Mac folx, or I would just restart and hope for the best. I have a working knowledge of basic computer operation, but that barely extends to the command line, which I think I’ve used more in the last week than across my entire life. As discussed here, I don’t really code either. Most of this, for me, is just trial and error; I guess my only ‘rules’ are reading up as much as I can on what’s worked/not for other people, and trying not to take too many unnecessary risks in terms of system security or hardware tinkering. The risk in this instance is also lessened by the passing of time: warranties are well out of date and thus won’t be voided by yanking out components.

As a media/materialism scholar, I know conceptually/theoretically that sleek modern devices and the notion of ‘the cloud’ belie the awful truth about extractive practices, exploited workforces, and non-renewable materials. Reading and writing about it is one thing; to see the results of all of that very plainly laid out on your desk is quite another. One cannot ignore the reality of the tech industry and how damaging it has been and continues to be. In the same vein, though, I’m glad that these particular materials and components won’t be heading to landfills (or more hopefully, some kind of recycling centre) for a little while longer.

  • Operation Tech Revival, Part 2

    Read Part 1 here.

    Photo by Alessandro Oliverio on Pexels.com

    Part 2: Mmm, Pi.

    A few years back I bought a Raspberry Pi 3B+, with the intention of using it as a safe little sandbox for learning to code. I thought maybe I would buy up some components and make little robots or something, maybe a web server or the like. Who knows, one day I may still do all of these things (and/or continue learning Python, which I abandoned at about the Functions mark).

    The Pi was a fun little thing to boot up every now and again when my primary computer became too slow/overwhelmed through lockdowns, or when I became overwhelmed by to-do’s, notifications, projects, etc, on my work machine. It really only has enough juice to run a web browser with one or two tabs, or LibreOffice Writer for basic word processing/drafting.

    But I never really considered how the Pi might fit into my overall tech set-up, or whether it might actually be suitable as a regular machine at all.

I’ve always been intrigued by people returning to simpler modes of engaging with tech, particularly those in knowledge work where plenty of writing or focus time is required. Devices like the Freewrite, the AlphaSmart, the reMarkable, all speak to a desire for writing with fewer bells and whistles, less distraction, and more focus and control over your ‘machinespace’, if not your actual space or environment.

    Cue late last year and early this year, where I started thinking more seriously about writing more regularly, particularly for this here blog. Cue also the aforementioned death of the Mac, and desire to revive some old tech, and maybe the Pi is just the right (write?) minimalist tool for the job. With an internet connection and basic desktop functions it’s not exactly a ‘dumb’ device, but I figured it might be a nice restricted environment to get some words pumped out.

Booting it up again, there was the old OS, Raspbian, a basic standard desktop wallpaper, and a Documents folder festooned with abandoned coding practice files. I figured starting from scratch might be a good idea. I won’t bore you with the details, but suffice to say sorting out which version of the new Raspberry Pi OS would work best on an older model of Pi was… taxing. Between the Pi and the MacBook I do want to be able to use at least some of my main apps/tools etc., including Obsidian, but finding versions of such programs that are compatible with both older hardware and older systems is fairly painful.
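Since I was poking at Python on the Pi anyway, here’s a quick way to check what a given machine can actually run before hunting down builds (a generic sketch, not tied to any particular app’s installer):

```python
import platform
import struct

# CPU architecture as the kernel reports it: armv6l/armv7l = 32-bit ARM,
# aarch64 = 64-bit ARM, x86_64 = 64-bit Intel. Downloads need to match this.
print(platform.machine())

# Word size of the running interpreter/userland: prints 32 or 64. A 64-bit-capable
# Pi can still be running a 32-bit OS, in which case 64-bit builds won't run.
print(struct.calcsize("P") * 8)
```

On my 3B+ under 32-bit Raspberry Pi OS, the second number is the reason so many modern Electron-style apps (Obsidian included) are off the menu.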

    Whenever I plug into ethernet, I feel like I’m going into lightspeed.

    For now, I’m running 32-bit Raspberry Pi OS. There’s no Obsidian (that may have to remain on the work laptop/iOS devices depending on how the 2011 Macbook goes), but I’ve got a basic version of LibreOffice up and running for docs, presentations, spreadsheets. The process really inspired me to try and get back into Python, if only to build up a working knowledge of it over the rest of this year. While more complex projects may function better on one of the bigger machines, I can at least use the Pi as a dedicated coding tool for now. Depending on how it all goes, I may end up trying some of those robotics or server projects I was daydreaming about.

    “Get outta my dreams; Get into my car…”

    I’m running this bad boy with the top down. Do Pi people say ‘with the top down’? I don’t really care, to be honest. I just mean I took the lid off because the poor little thing got quite hot, what with being wiped and reloaded 3-4 times over the course of a few hours. For shits and gigs, I also love hooking the Pi up to my enormous 4K monitor; pretty remarkable that this tiny little box can project to a display so huge with decent resolution.
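Incidentally, ‘quite hot’ is measurable: Raspberry Pi OS exposes the SoC temperature through sysfs as a plain-text millidegree value. A tiny Python sketch (the sysfs path is the standard one on the Pi; the helper names are mine):

```python
def millideg_to_c(raw):
    """Convert a sysfs thermal reading (millidegrees Celsius, as text) to degrees."""
    return int(raw.strip()) / 1000.0


def cpu_temp(path="/sys/class/thermal/thermal_zone0/temp"):
    """Read the SoC temperature in degrees Celsius, or None if unavailable."""
    try:
        with open(path) as f:
            return millideg_to_c(f.read())
    except (OSError, ValueError):
        return None
```

Handy for checking whether taking the lid off actually helps; `vcgencmd measure_temp` reports the same figure if you’d rather stay on the command line.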

    Once again, precisely how it fits into my workflows, processes, projects, let alone how it could remain semi-permanently in or on the physical workspace, remains to be seen. It was fun, though, to get it back to zero, to a place where I can answer some of those questions as I move forward.

Speaking of moving forward, the doorbell just rang; I think a solid-state drive just arrived. Which means the Pi is done for now… next up, the MacBook…