Press, PR & Media

Dafydd Sills Jones, Associate Professor at Auckland University of Technology, visited the Virtual Immersive Production (VIP) Studio towards the end of 2025. During his visit he met Richard Ramchurn, lead on the Somabotics AI Lens project, and joined an AI workshop held at the VIP Studio.
Dafydd reflected on the visit:
I was delighted to be included in an AI workshop during my visit to Nottingham, catching up with Sarah Martindale, Creative Director of the Virtual and Immersive Production Studio, and other colleagues. I had been working with Runway myself and had conversations with colleagues about using generative AI and machine learning within a visual effects (VFX) pipeline, for example in Nuke.
I had also experienced, as had many, the beauty of the strange hallucinogenic imagery that came out of early Runway and Sora releases, and felt that there was something akin to it in experimental film, in that the creator gives up control to the ‘ghost in the machine’ and then uses the material that emerges as a departure point for further experimentation.
When I encountered AI Lens in Nottingham, it combined that experimentalism with a liveness that real-time rendering has brought back into high-end digital production. A conversation between author and machine, through text, image and video prompts, is now possible, as well as a lack of control: potentially a balancing, or a riding, of the tension between authorial voice and the accidental aesthetics of the machine.
After experiencing the Nottingham system, I was keen to see what would happen if the environment the camera feeds into AI Lens was less stimulating, and whether the AI could be ‘slowed down’, as part of the more mainstream narrative concern with bringing generative AI ‘under control’ and bending it towards coherence across shots. There is clearly a tension between that aim and the desirability of the hallucinogenic and dialogic effects that generative AI can produce.
We set up our own version of the AI Lens workflow with a laptop webcam and tested it against a greenscreen, and did indeed see that a blank background slowed things down a bit. We also found that shooting in black and white slowed down the glitching.
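A minimal sketch of that test rig, assuming a Python/OpenCV capture path and a hypothetical stylise_frame placeholder standing in for whatever generative model AI Lens actually runs (that detail is not covered here):

```python
# Sketch of the webcam test: black-and-white frames from a laptop camera,
# passed through a placeholder for the generative step.
import cv2

def stylise_frame(frame):
    # Placeholder: AI Lens's actual model is not described here, so this
    # stand-in simply returns the frame unchanged.
    return frame

cap = cv2.VideoCapture(0)  # laptop webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Black-and-white input: in the test above this appeared to calm the glitching.
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grey = cv2.cvtColor(grey, cv2.COLOR_GRAY2BGR)  # keep 3 channels for the model stand-in
    out = stylise_frame(grey)
    cv2.imshow("ai-lens-style output", out)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```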
We now want to take this into a VP studio and see what happens when we experiment with looping things:
- Recording an AI-generated passage, putting that on the VP LED screen, and having actors perform in front of it
- Looping the image that comes out of AI Lens back onto the screen behind the figure that the camera itself is looking at
- Shooting a figure through AI Lens against a green screen, and later compositing that onto pre-recorded AI Lens material (a rough sketch of this compositing step follows this list)
- Taking a feed from somewhere else (Nottingham) and experimenting with ‘compositing’ of various kinds
- For example, putting the recorded/live feed from Nottingham onto the LED screen, and then filming a figure through AI Lens with that background
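To make the compositing idea concrete, here is a rough sketch (not the studio's actual pipeline) of a simple chroma-key composite in Python/OpenCV; the filenames are hypothetical stand-ins for a green-screen capture and a pre-recorded AI Lens clip.

```python
# Sketch of the chroma-key idea: key a figure out of a green background and
# place it over pre-recorded AI Lens material. Filenames are placeholders.
import cv2
import numpy as np

fg_cap = cv2.VideoCapture("figure_on_greenscreen.mp4")  # hypothetical capture
bg_cap = cv2.VideoCapture("prerecorded_ai_lens.mp4")    # hypothetical AI Lens recording

while True:
    ok_fg, fg = fg_cap.read()
    ok_bg, bg = bg_cap.read()
    if not (ok_fg and ok_bg):
        break
    bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))
    # Rough chroma key: mark strongly green pixels as background.
    hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([35, 80, 80]), np.array([85, 255, 255]))
    composite = np.where(mask[..., None] > 0, bg, fg)
    cv2.imshow("composite", composite)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

fg_cap.release()
bg_cap.release()
cv2.destroyAllWindows()
```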
This looping and mixing could go on through as many variations as we wanted to try, and of course we would make discoveries along the way. But the main questions to be asked could be some of these:
- What conditions amplify AI Lens’s hallucinatory tendencies? What conditions stabilise or “calm” the system — and at what aesthetic cost? (Menkman’s instability as a revelatory state rather than a defect; Manovich’s sense that AI outputs reflect the tendencies of training conditions; Crawford’s notion that the behaviour of AI systems is shaped by constraints, infrastructures, and omissions)
- How much authorial voice can be relinquished before meaning collapses? When does letting go produce discovery rather than incoherence?
- What traces remain when AI systems process their own outputs repeatedly? Can looping function as a form of forensic or evidentiary aesthetics?
- How does AI Lens complicate the ontology of virtual production environments? What does “presence” mean when bodies, spaces, and images are computationally mediated?
- How does real-time interaction change our relationship to AI-generated imagery? Does liveness restore improvisation and performance to digital production?
- How does spatial and temporal distance become part of the aesthetic? Can cross-time-zone VP workflows generate new forms of shared authorship?
Dafydd is currently Principal Investigator of the Portals/Traces project at the AUT Virtual Creative Design Research Centre.
Dafydd is planning another visit to the UK this year, during which he and Richard will pick up on potential collaborative opportunities.