
A little while ago, I spoke with machine learning engineer and responsible AI expert Bogdana Rakova about my approach to generative AI education and research: embracing the weird, messy, and broken aspects of these technologies rather than trying to optimise them.
This conversation was part of Bogdana’s expert interview series on ‘Speculative F(r)iction in AI Use and Governance,’ examining form, function, fiction, and friction in AI systems.
We discussed my classroom experiments mixing origami with code, the ‘Fellowship of Tiny Minds’ AI pedagogy project, and why I deliberately push AI systems to their breaking points. The conversation explores how glitches and so-called ‘hallucinations’ can reveal deeper truths about how these systems work, and why we need more playful, hands-on approaches to AI literacy.
The piece connects to my ongoing research into everyday AI: examining glitch as a tactic of resistance, the time-looped recursive futures of the Slopocene, and experimental methods for rethinking creativity, labour, and literacy in an era of machine assistants.
Read the full chat at this link, and share your creative responses on the page if you’re moved to!