Soma Labels
This research explores how AI can support embodied, expressive interaction between humans and robots. Rather than treating movement data as having a single “correct” interpretation, our approach embraces the subjective, ambiguous, and culturally diverse nature of how bodies move and what movements mean.
Our work investigates several interconnected questions: How can robots learn expressive movements from the sparse demonstrations that artists and creative practitioners can provide? How might AI systems capture and transfer movement styles across different contexts and modalities? How can we design labelling and learning systems that celebrate divergent interpretations rather than treating variation as noise?
Recent explorations include generative models that enable robots to create novel, stylistically rich movements from small sets of expert demonstrations, and cross-modal systems that allow robots to learn movement qualities from human video. Throughout this work, we maintain a commitment to inclusive, body-centred perspectives, ensuring that AI systems can accommodate diverse bodies, movement practices, and critical viewpoints (feminist, crip, queer).
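To make the few-shot idea concrete, here is a minimal sketch of style-conditioned motion generation: a handful of expert demonstrations are pooled into a style vector, which then conditions an autoregressive decoder. The module names, GRU-based architecture, and dimensions are illustrative assumptions, not the project’s published model.

```python
# Minimal sketch of few-shot style-conditioned motion generation.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Pools a handful of demonstration trajectories into one style vector."""
    def __init__(self, n_joints=7, d_style=64):
        super().__init__()
        self.gru = nn.GRU(n_joints, d_style, batch_first=True)

    def forward(self, demos):            # demos: (n_demos, T, n_joints)
        _, h = self.gru(demos)           # h: (1, n_demos, d_style)
        return h[-1].mean(dim=0)         # average over the few demos

class MotionDecoder(nn.Module):
    """Autoregressively generates a trajectory conditioned on a style vector."""
    def __init__(self, n_joints=7, d_style=64, d_hidden=128):
        super().__init__()
        self.cell = nn.GRUCell(n_joints + d_style, d_hidden)
        self.out = nn.Linear(d_hidden, n_joints)

    def forward(self, style, start_pose, steps=100):
        h = torch.zeros(1, self.cell.hidden_size)
        pose, poses = start_pose.unsqueeze(0), []
        for _ in range(steps):
            h = self.cell(torch.cat([pose, style.unsqueeze(0)], dim=-1), h)
            pose = self.out(h)
            poses.append(pose)
        return torch.cat(poses, dim=0)   # (steps, n_joints)

demos = torch.randn(3, 50, 7)            # three short expert demonstrations
style = StyleEncoder()(demos)
motion = MotionDecoder()(style, start_pose=torch.zeros(7))
print(motion.shape)                       # torch.Size([100, 7])
```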
A core principle guiding this work is that movement data should support multiple valid interpretations. We’re developing tools and approaches that encourage subjective labelling, represent personal and embodied experiences, and accommodate different disciplinary perspectives on movement. Whether through labelling interfaces that embrace ambiguity, generative models that preserve expressive variation, or visualisation tools that let audiences compare diverse movement experiences, this work seeks to expand how AI can engage with the complexity of human movement.
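One concrete way to treat divergent interpretations as signal rather than noise is to keep every annotator’s reading as a soft label distribution instead of collapsing annotations to a majority vote. The sketch below uses hypothetical label names and simple frequency-based aggregation; a model trained against such distributions (e.g. with cross-entropy to the soft target) preserves the expressive variation described above.

```python
# Sketch: retain all annotators' interpretations as a soft label
# distribution rather than a single majority-vote label.
# Label names are hypothetical.
from collections import Counter

def soft_label(annotations):
    """Map a clip's annotations to a probability distribution over labels."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: round(n / total, 3) for label, n in counts.items()}

# Three annotators read the same reaching gesture differently;
# all three readings are retained with their observed weight.
clip_annotations = ["tender", "hesitant", "tender"]
print(soft_label(clip_annotations))  # {'tender': 0.667, 'hesitant': 0.333}
```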
This research enables new possibilities for human-robot creative collaboration, artistic expression, and embodied interaction while remaining open to emerging opportunities as the technology and artistic practice evolve together.
Outputs and Highlights
- NeurIPS 2025: Paper “Learning to Move with Style: Few-Shot Cross-Modal Style Transfer for Creative Robot Motion Generation” accepted for publication
- UKAI 2025: Presented “Beyond Functional Robots: Time-Series Transformers for Generating Robot Embraces”, demonstrating how transformer architectures enable robots to generate expressive embracing movements (a sketch of this kind of model follows this list)
- CineKids Festival Amsterdam: Demonstrated AI-generated robot movements from the Embrace Angels artwork, showcasing how generative models can create expressive robot behaviours for public engagement
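For readers curious what a time-series transformer for motion generation can look like in outline, here is a hedged sketch of next-pose prediction over a joint-angle sequence. The architecture, learned positional embedding, and dimensions are illustrative assumptions, not the model presented at UKAI 2025.

```python
# Hedged sketch of a time-series transformer that predicts the next pose
# from a pose history; names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PoseTransformer(nn.Module):
    def __init__(self, n_joints=7, d_model=64, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Linear(n_joints, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_joints)

    def forward(self, poses):                      # poses: (B, T, n_joints)
        T = poses.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)  # causal mask
        h = self.encoder(self.embed(poses) + self.pos[:, :T], mask=mask)
        return self.head(h)                        # next-pose prediction per step

history = torch.randn(1, 50, 7)                    # one embrace trajectory so far
next_poses = PoseTransformer()(history)
print(next_poses.shape)                            # torch.Size([1, 50, 7])
```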