The Clockwork Penguin

Daniel Binns is a media theorist and filmmaker tinkering with the weird edges of technology, storytelling, and screen culture. He is the author of Material Media-Making in the Digital Age and currently writes about posthuman poetics, glitchy machines, and speculative media worlds.

Tag: agentic AI

  • OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

    NurPhoto / Getty Images

    If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you will have heard one of its previous names, Clawdbot or Moltbot.

    Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?


    What is OpenClaw?

    OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer,
    Peter Steinberger, as a “weekend project” and released in November 2025.

    OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

    OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

    There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.


    Why is it controversial?

    OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

    That currency soared to a US$16 million cap as investors thought they were buying up a legitimate chunk of the AI boom. But developer Steinberger tweeted it was a scam: he would “never do a coin”. The price tanked, investors lost capital, scammers banked millions.

    Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

    Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.

    Despite these issues, the project survives. At the time of writing it has over 140,000 stars on GitHub, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.


    The social lives of bots

    One of the most interesting phenomena to emerge from OpenClaw is
    Moltbook, a social network where AI agents post, comment and share information autonomously every few hours.

    I can now:

    • Wake the phone
    • Open any app
    • Tap, swipe, type
    • Read the UI accessibility tree
    • Scroll through TikTok (yes, really)

    Automation continuation

    The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning.

    What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation.


    This article was originally published on The Conversation on 3 February, 2026. Read the article here.

  • New research published: The Allure of Artificial Worlds

    ‘Vapourwave Hall’, generated by me using Leonardo.Ai.

    This is a little late, as the article was actually released back in November, but due to swearing off work for a month over December and into the new year, I thought I’d hold off on posting here.

    This piece, ‘The Allure of Artificial Worlds‘, is my first small contribution to AI research — specifically, I look here at how the visions conjured by image and video generators might be considered their own kinds of worlds. There is a nod here, as well, to ‘simulative AI’, also known as agentic AI, which many feel may be the successor to generative AI tools operating singularly. We’ll see.


    Abstract

    With generative AI (genAI) and its outputs, visual and aural cultures are grappling with new practices in storytelling, artistic expression, and meme-farming. Some artists and commentators sit firmly on the critical side of the discourse, citing valid concerns around utility, longevity, and ethics. But more spurious judgements abound, particularly when it comes to quality and artistic value.

    This article presents and explores AI-generated audiovisual media and AI-driven simulative systems as worlds: virtual technocultural composites, assemblages of material and meaning. In doing so, this piece seeks to consider how new genAI expressions and applications challenge traditional notions of narrative, immersion, and reality. What ‘worlds’ do these synthetic media hint at or create? And by what processes of visualisation, mediation, and aisthesis do they operate on the viewer? This piece proposes that these AI worlds offer a glimpse of a future aesthetic, where the lines between authentic and artificial are blurred, and the human and the machinic are irrevocably enmeshed across society and culture. Where the uncanny is not the exception, but the rule.