My Placement at the Virtual and Immersive Production Studio

Post by Callum Berger (2021 Cohort)

1. Placement at the Virtual and Immersive Production Studio

I started my placement back in July 2023, working with my industry partner AlbinoMosquito Productions Ltd, run by Richard and Rachel Ramchurn and based at the Virtual and Immersive Production (VIP) Studio at King’s Meadow Campus. The VIP Studio houses a range of high-end technologies for artists and developers to use in creating projects and artwork, including motion capture, volumetric capture, Pepper’s ghost holograms, Tesla suits, and virtual reality (VR) headsets. As one of the only studios in the Midlands to offer this range, this was a great opportunity for me to work alongside other creators and gain experience in design and creation using a variety of technologies.

Throughout the first month of the placement, I worked alongside Richard on a series of different development projects, including working with artists to capture their performances. These performances ranged from tightrope artists to dancers with disabilities, and it was incredible to see the diversity of ways these technologies allowed artists to express themselves.

2. Filming a Virtual Reality Experience

As the placement progressed into the second month, I began working with Richard to develop a virtual reality experience using volumetric capture. This project, funded by Arts Council England, aimed to produce an experience that took an audience through a futuristic Nottingham engulfed in a climate crisis, prompting discussion among the audience about the climate situation we live in today. Planning for this project had begun in the previous year, when Richard, Rachel, and I looked into potential applications of an adaptive VR experience that used brain data as a real-time input. Having decided on a climate crisis experience and its basis, we used the placement as an opportunity to bring it to life.

Working alongside AlbinoMosquito on the creation of a VR climate experience presented a series of challenges and opportunities, including moving from 2D to 3D space, building digital environments, presenting actors, and handling user interaction. Going beyond 2D film into a 3D space requires an awareness of the visual surroundings: users within the experience have a 360-degree view of the environment, so capturing actors and environments from a single camera viewpoint alone is not enough.

We decided to split the experience into a series of different scenes that would allow the viewer to embody an individual and witness different events unfold over time from their perspective. The journey begins with a group of refugees moving to a camp within Nottingham, who engage in dialogue about concerns for their safety before tragedy befalls the group. I worked loosely with the writers on this, helping shape the direction toward more fearful aspects in the hope of using the experience as part of my PhD.

Once the scripts were complete, we began casting for the experience, focusing on local actors to support the growth of the industry within Nottingham and the surrounding areas. Casting involved potential actors sending in short demo tapes of themselves playing the particular character they were being considered for. From there, we made final decisions on who would be cast and were ready to move on to filming.

2.1  Filming with Volumetric Capture

Through testing, we found that using fewer cameras for each render reduced both the processing workload and the storage requirements for the captures. Richard and I therefore settled on using the fewest cameras possible for each capture. The minimum number was determined through trial and error: we tested each scene setup to find how few cameras were necessary to capture all actors within the scene. This approach involved placing cameras that captured the actors from the viewpoint of the user, as shown in Figure 1. As the experience embodies the viewer in a point-of-view (POV) position, actors needed to be aware of where this virtual camera would be throughout filming, so we placed an ambisonic microphone at the POV position, allowing the actors to move around the space with awareness of the viewer’s location.

However, this presented an issue with calibration: to set up the 3D space to be captured, the cameras must be calibrated with each other, which requires overlapping fields of view between cameras. To overcome this, we added pairing cameras to provide the necessary overlap. This made calibration work, but at the cost of more cameras in the scene. Having returned to our previous predicament of too many cameras, I developed a program within Unity that allowed any chosen cameras to be removed from a recording. It took the metadata of a volumetric capture as input, calculated which cameras were used, and displayed them to the user. We could then remove any pairing cameras not needed for the final render, and the program would output a new metadata file describing the altered capture, which could then be rendered with the new camera setup.
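The core idea of the camera-removal step can be sketched as a simple metadata filter. The real tool was built in Unity, and the file format here (a JSON file listing camera entries by ID) is purely an assumption for illustration, as are the function and field names:

```python
import json

def remove_pairing_cameras(metadata_path, cameras_to_remove, output_path):
    """Load a capture's metadata, drop the chosen cameras, and write a new
    metadata file describing the altered capture.

    Hypothetical sketch only: the actual tool was a Unity program, and the
    JSON structure assumed here is illustrative, not the real format.
    """
    with open(metadata_path) as f:
        metadata = json.load(f)

    # List which cameras the capture used, so the user can choose removals.
    used = [cam["id"] for cam in metadata["cameras"]]
    print("Cameras in capture:", used)

    # Keep only the cameras needed for the final render.
    metadata["cameras"] = [
        cam for cam in metadata["cameras"] if cam["id"] not in cameras_to_remove
    ]

    # Save the altered capture as a new metadata file.
    with open(output_path, "w") as f:
        json.dump(metadata, f, indent=2)

    return [cam["id"] for cam in metadata["cameras"]]

# Usage (matching Figure 1): strip the pairing cameras P1 and P2, leaving
# C1, C2, and C3 for the final render.
# remove_pairing_cameras("capture.json", {"P1", "P2"}, "capture_trimmed.json")
```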

Filming took place over a week. With a limited budget, we had to minimise travel and accommodation costs for actors where possible, making full use of each day the actors were in to capture all of their scenes. This meant planning which scenes were filmed each day and how we could split scenes up based on separate dialogues. After filming finished, we were able to bring the volumetric captures into a virtual world and build the environments around the actors.

Figure 1: An example setup for volumetric capture from the point of view of the viewer. POV is the experience viewer’s position. C1, C2, and C3 are the cameras used for capture. P1 and P2 are the cameras used for pairing. A1, A2, A3, and A4 are the actors within the scene.

3. Post-production of the Virtual Reality Experience

The final month of my placement involved bringing all the captures into Unity and working alongside a 3D graphics artist, Sumit Sarkar, and a sound artist, Gary Naylor, to bring the experience to life. Although we are still adding to and refining the experience, we have a complete draft and have begun screening it for audiences. The screenshots below show examples of the experience using the captures. We are now in the process of preparing the experience to be screened at film festivals in the summer.

Figure 2: A screenshot taken from one of the scenes known as Refugee.
Figure 3: A screenshot taken from one of the scenes known as Raid.

The placement allowed me to work in situations and on projects I would never otherwise have had the chance to, and it has been invaluable to my PhD experience. I want to give a huge thank you to Richard and Rachel for their work throughout the placement, as well as to everyone else involved in the projects for helping me get the best out of my placement at the VIP Studio.

The GIFT project

Post by Harriet Cameron (2018 Cohort)

The GIFT Project is an international project funded by Horizon 2020 which explores new ways of engaging with cultural heritage through gifting. The scope of the project is huge, drawing together researchers, artists, designers, and museum professionals from across Europe, including the University of Nottingham’s Mixed Reality Lab. GIFT has developed and deployed various experiences with museums in Denmark, Italy, Norway, Spain, Serbia, the UK, and the USA since it began in 2017. It has several different ‘tracks’ within it, each of which explores different elements of gifting, interactivity, and cultural experiences. For example, The Gift Experience allows the user to choose objects or places within the museum; photograph them; personalise elements of them, for example with a written note or audio comment; and then gift them to someone to experience for themselves. Another example is The One Minute Experience, which uses templates and guidelines to enable visitors to write short texts about objects viewed in the museums, which they can then leave as gifts for other visitors. I was lucky enough to meet the wonderful Dr Jocelyn Spence, the lead Research Fellow at the University of Nottingham for the GIFT project and originator of VRtefacts (developed alongside the equally wonderful Dr Dimitrios Darzentas), early on in my PhD. Through her, I learned about the GIFT project and the amazing work they were doing.

My PhD project works with the Nottingham Contemporary art gallery to explore relationships between audience, art, and venue, and how those relationships can be better understood and developed into something more long-term, personal, and meaningful through the use of novel technologies. Naturally, the GIFT project offered a fantastic insight into some of the ways work like mine is already being undertaken, and a chance to see how this work is received by the public. When I was given the opportunity to help with a two-day deployment of the VRtefacts experience as part of the GIFT project, I was delighted to get on board.

In late May 2019, at the Derby Museum and Art Gallery, we showed VRtefacts to the public for the first time. The project, without spoiling anything for any reader who may yet get a chance to experience it themselves, used virtual reality (VR) to encourage visitors to donate personal stories to the Derby Museum. Enabled by a combination of tactile and digital technologies, and a beautiful VR environment created by Dr Dimitrios Darzentas, visitors were able to interact with artefacts in a thoroughly immersive and novel way. My role for the course of the deployment was to get each visitor settled into the VR environment, set the scene for their donation experience, and then guide them through their storytelling. We heard from a broad array of people, who donated an even broader range of stories. From hypothesising what an artefact may have been used for, to memories of related objects and places, to tangential personal anecdotes and fictional hyperbole, we were gifted with some fantastic tales that added a resonant, human layer to the objects displayed. The value of this to the museum, the visitor, and the research project is manifold. For the museum, it gave a new avenue to understanding their audiences and the meanings they take from the exhibits shown. For the visitor, it allowed a deeper way to engage with the exhibits, a space to reflect on their own experience or expertise, and a platform to share those reflections with others. Finally, in terms of research, it demonstrated a novel, exciting way of accessing audiences, as well as the importance of interdisciplinary projects in contemporary research.

The future for museums and galleries comes, in part, in a technologically driven, interactive format, which enables visitors to experience not just the exhibits, but the museum visit as a whole, in novel and exciting ways. VRtefacts is a timely and exhilarating glimpse at what future museum visiting may entail, and the feedback from the public who engaged with the project was overwhelmingly positive. By providing a way for visitors to interact with artefacts and exhibits in a tactile, personal manner, the project made it apparent that each visitor had their own interpretations, reflections, and indeed stories for each piece, brought to the fore by the enablement of the technologies involved, which they were excited to share with the museum as well as with each other. Enabling visitors to share their stories was not only well received by them, but also by the museum, who were pleased to learn about the histories of each piece, or the personal relevance of the artefacts to the individual. VRtefacts represents one face of the future for museums and galleries, in which personalised interactivity forms an important part of the visitor experience.

On a more personal note, the project demonstrated just one way that technologies can be utilised to enable and encourage connections between visitors, cultural venues, and exhibits. Despite my involvement beginning late in the process, just a few weeks before the date of the intervention, I was delighted that my feedback on the human engagement element was integrated into the final experience, and it provided a valuable insight into how these kinds of projects are developed and deployed in a museum setting. Running the experience also allowed me to revisit storytelling skills I had established during my time working at an escape room, and to develop those skills in a new context. Most importantly, I think, it gave me an insight into the practicalities of running an intervention: potential pitfalls and opportunities, the value of a strong team, and the importance of foresight (like bringing spares for your spares!). I’m looking forward to being involved in more projects like this in the future, learning more and offering more back, and even at some point using the skills I have been developing to stage my own interactive experience within a cultural institution as part of my PhD.

VRtefacts Outreach at Derby Museum & Art Gallery

Post by Joseph Hubbard-Bailey (2016 Cohort)

The VRtefacts project provides museum and gallery visitors with the opportunity to hold and explore exhibit objects which they would otherwise just look at behind a literal or figurative red rope. Throughout the day, visitors from around the museum were invited to come and put on a VR headset, interact with some 3D-printed, VR-augmented models of artefacts, and share their own story or commentary about the objects as they handled them. They then moved into another room for a short interview about the experience, allowing the next participant to get started with the VR. While previous outreach events I’ve done have felt engaging and productive, none have been as interactive as this VRtefacts trial; others mostly involved having conversations across tables, and the distance and dynamic between researcher and participant felt similar to a campus-based study scenario. Due to the nature of this event, with participants engaging physically and narratively during the session, members of the public seemed much more a part of what was going on, as opposed to passive spectators.

For the visitors who chose to participate in the VRtefacts project, the experience served as both a novel sort of ‘exhibit’ in itself and a novel way to access preexisting materials in the museum’s collection. The latter seemed of particular value to visitors who lived locally and so visited the museum often, offering an unexpected new level of access to familiar objects. The opportunity to contribute or “donate” a story as part of the VRtefacts experience may also have been particularly appealing to those who visit regularly and were keen to ‘give back’ to the museum. Several visitors did fall into this category of ‘regulars’, but there were also plenty of people who were passing through and popped in to pass the time. Visitors across both of these groups commented on how the decision to work with VRtefacts reflected well on Derby Museum, showing its openness to new ideas and its resistance to stagnation. For those who were visiting the museum in groups, engaging with the VRtefacts exhibit seemed to provide a great source of interest and conversation as they emerged and compared experiences. The fact that the corresponding artefacts themselves were available in the museum’s collection also meant that there was a comfortable transition back into the rest of the exhibit, as people could go and find the ‘real thing’ they had just encountered virtually.

Before I left the museum for the day, I sat down on the duct-taped-still chair and had my hairdo sabotaged by the VR headset so that I could have a go at the VRtefacts experience myself. I chose and inspected a small intricate model of a giant jet engine, turning it over and fumbling around the prickly detail of the gaskets while I tried to think of something clever to say for the camera. It reminded me of a frighteningly massive aircraft housed at the RAF Museum in Hendon, where I’d been for relentless school trips as a child due to its proximity to school grounds. I remember cowering through the awful hangar where the scary plane’s wings were so expansive that you had no option but to walk underneath them if you wanted to get out. While this wasn’t a pleasant experience, I think the physicality of being below the Vulcan — which I now know was not just a war plane, but a strategic nuclear bomber — came to mind during VRtefacts because it was a similar example of the power of perspective.

Image credit: Kenneth Griffiths (Ascension Island, 1982)

When an object is in a glass case or on a screen or behind a rope, I think we often instinctively revert to what I can only describe as a ‘flat’ perspective on it. We might press our noses to the glass as children to try and get a closer look, but the glass fogs up and we get told off, so eventually our curiosity wanes and we take a respectful cursory look instead. What this tired perspective gives us is often limited to two-dimensional factual information about the object of interest, without the weight and contour and colour of the object’s life. I’m very glad I decided to have a go with the VRtefacts pilot myself before I left the event, because it made me aware of how cowering under the expanse of the Vulcan’s wings taught me more about the gravity of war than any of my history lessons had. There is a narrative power in an artefact’s physicality which cannot be accessed by simply looking at it — the VRtefacts project has the potential to provide that physicality in a way that protects the original object, which needn’t even be on the same continent as its VR counterpart.

Beyond the benefit this technology could offer in enhancing the habitual gallery-goer’s usual experience, there is also potential benefit to those who aren’t so familiar and comfortable with these venues. Having come from a family who didn’t really go to museums or galleries, I still feel quite awkward and out of place in these spaces at times. I don’t think it’s much of a leap to suggest that projects like VRtefacts — which offer more diverse ways of accessing meaning in historical and art objects — have the potential to make galleries and museums not only more engaging for visitors, but more accessible to a diverse range of visitors.

Thanks to Jocelyn Spence and the rest of the VRtefacts team for letting me join in for the day!

VRtefacts is a pilot project developed within the European Union’s Horizon 2020 research and innovation programme under grant agreement No 727040, GIFT: Meaningful Personalization of Hybrid Virtual Museum Experiences Through Gifting and Appropriation.

–originally posted on Joe’s blog