Navigating Imaginary Landscapes: My Placement with Makers of Imaginary Worlds

post by Pavlos Panagiotidis (2022 cohort)

My placement with Makers of Imaginary Worlds took place in various locations around Nottingham and remotely.

Start date: 25/06/2023
End date: 25/09/2023

During the past summer, I had the opportunity to participate in a three-month placement with Makers of Imaginary Worlds, a company that combines art and technology in innovative ways to create experiences for children and families. I worked on a number of projects during my time there, which helped me gain a better understanding of the practical implications of engaging audiences in mixed reality experiences, as well as the potential for research in the intersection of HCI and performance.

During my placement, I was presented with several opportunities to work on projects involving immersive technologies, approaches to audience engagement, and prototype technologies for performance. These projects, located in various parts of Nottingham, posed diverse challenges that made the experience exciting and solidified my interest in the intersection of art and technology. The placement helped me refocus my research objectives towards areas likely to have practical impact: developing innovative methods to assess audience engagement through computer vision, and creating methodologies to evaluate the aesthetic implications of emerging technologies in performance-making.

One project I worked on involved the qualitative analysis of interviews with visitors to the “Home Zero” art installation. The installation was designed to encourage participants, mainly children and families, to envision a more sustainable future through a playful, interactive experience that used paintings from the National Gallery as a starting point. I cleaned the data and performed a preliminary analysis of the interview transcripts to study how audiences interacted with and perceived the installation. My analysis provided evidence that visitors enjoyed the tangible interfaces and hands-on interactions, which made the experience more engaging and effectively supported the learning process. Later that year, I co-authored a paper that developed these preliminary insights into a study of the significance of tangibility in designing mixed reality experiences about environmental sustainability for children. I also had the opportunity to contribute to another academic paper based on “Home Zero”, which aims to build bridges between theatre and computer science, exploring how these fields can converge to enhance participatory design.

An example of an interesting field observation was when a child participant in “Home Zero” used the “Imagination Scanner”, a device that supposedly measured the participant’s imaginative capacity. The child’s excitement was palpable when they scored higher than their parents, and the automated system rewarded them by opening the door to the next part of the installation. This moment highlighted how design and technology can invert typical familial hierarchies, providing a unique and empowering experience for the children involved.

During my placement, I also had the opportunity to engage closely with “The Delights”, an event that blended dance, sensory activities, and interactive installations to captivate its young audience at the Hoopla Festival, held in Nottingham’s local parks. My role involved interviewing families to document their experiences and synthesising this information into a detailed report for stakeholders such as the festival committee. The report not only showcased the high level of audience engagement but also underscored the event’s impact on community connection, child development, and the creative transformation of public spaces. I gained valuable experience of the process funders require for publicly funded events: collecting and analysing data and reporting outcomes to justify subsidising an art-making company.

Evidently, the event transformed perceptions of local parks from mere recreational spaces into vibrant community hubs that facilitate child development, artistic expression, and community bonding. Interviews with parents revealed significant shifts in how these spaces are viewed and utilised, emphasising the parks’ new roles as sites for creative and interactive family engagement. Notably, parents appreciated how the event encouraged their children’s expressive skills and social interactions, with many noting increased confidence and communication in their children as a result of participating in the activities offered. The experience showed me the importance of audience insights in designing experiences: understanding audience behaviour, expectations, and engagement can be crucial in creating successful events.

The most technically challenging aspect of my placement was working on a computer vision prototype for assessing audience participation. This project aimed to collect and analyse data about audience behaviour in interactive installations and to explore how computer vision might be used to refine interactive artistic experiences.
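To give a flavour of what such a prototype can involve, here is a minimal sketch of one possible building block, assuming OpenCV and its bundled Haar face detector; the function name and logging format are illustrative rather than a description of the actual placement prototype.

```python
# Illustrative sketch only: log a timestamped count of detected faces as a
# crude proxy for audience presence in front of an installation.
# Assumes the opencv-python package; names and CSV format are hypothetical.
import csv
import time

import cv2

# OpenCV ships this pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def log_audience_presence(video_source=0, out_path="presence_log.csv",
                          max_frames=1000):
    """Write one CSV row per frame: timestamp and number of faces detected."""
    cap = cv2.VideoCapture(video_source)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "faces_detected"])
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5)
            writer.writerow([time.time(), len(faces)])
    cap.release()

if __name__ == "__main__":
    log_audience_presence()
```

Presence counts logged this way can then be aligned with moments in an installation to see which elements draw and hold an audience’s attention.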

During my placement at Makers of Imaginary Worlds, I gained a deeper understanding of how my backgrounds in theatre, physics, and computer science apply synergistically to mixed reality experiences. These diverse approaches include assessing audience engagement, designing for optimal user experiences, performing qualitative and quantitative data analysis, and exploring the potential of physical and technological prototypes in performance. While being a “jack of all trades and master of none” can pose challenges in pinpointing one’s exact skills, it also allows for unique involvement in and contributions to artistic projects.

Further reflecting on interdisciplinary approaches, I recognised that while the potential for convergence between computer science and theatre is evident, the independent evolution of these disciplines has occasionally made collaboration challenging. However, this placement reinforced my belief in the value of interdisciplinary research and the potential to bridge gaps between these fields, making designing each mixed reality performance a valuable step toward this integration.

In general, my placement with Makers of Imaginary Worlds was a valuable experience that enhanced my understanding of immersive technologies and audience engagement in a real-world setting. It solidified my commitment to exploring the intersection of art and technology, paving the way for my future work in the field. Thanks to the placement, I developed a deeper understanding of the intersection of HCI and performance, both academically and practically. I learned that collaboration and interdisciplinary research are crucial in creating and studying mixed reality events: mixed reality requires a blend of skills and knowledge spanning art, technology, and design. Processes that support interdisciplinary collaboration are therefore essential in creating innovative mixed reality experiences.

The Intersection of AI and Animal Welfare in Cat Royale: A Reflection on Public Engagement as a Computer Vision Expert

post by Keerthy Kusumam (2017 cohort)

Recently, I had the chance to be part of an interview video focused on my role as a computer vision expert in the upcoming project Cat Royale, developed by Blast Theory. The project explores the impact of AI on animals, and specifically on cats. As a computer vision expert, I was thrilled to share my work and knowledge with the audience.

Reflecting on the experience, I realize that my main aim for the video was to educate the public about the use of computer vision technology in animal welfare. The field of animal welfare has always been close to my heart, and I saw this opportunity as a way to demonstrate the impact that technology can have in this area. Cat Royale is a unique and creative way to showcase the application of computer vision in animal welfare, and I wanted to highlight this aspect of the project in the video.

The target audience for the video was the general public with an interest in technology, AI, and animal welfare. To reach this audience, I had to consider and adapt my language and presentation to suit their level of understanding and interest. I broke down the concept of computer vision technology and its application in the Cat Royale project into simple terms that could be easily understood by everyone. I also emphasized the importance of involving experts in animal welfare in the design of the project to ensure the comfort and safety of the cats.

In the video, I discussed how the computer vision system in Cat Royale measures the happiness of the cats and learns how to improve it. I highlighted the unique design of the utopia created for the cats, where their every need is catered for, and how the computer vision system learns which activities make the cats happier. I explained that the ultimate goal of the project is to demonstrate the potential of computer vision technology to improve animal welfare.

One of the biggest challenges I faced in the video was ensuring that I provided enough technical detail for the audience to understand the concept of computer vision technology, while also keeping it simple enough for a general audience to grasp. To achieve this balance, I used analogies and examples that related to the audience’s everyday lives, making it easier for them to understand the concept.

It is important to note that people often assume the computer vision system itself decides whether the cats are happy. This is not the case. It is the cat experts who identify a list of behaviours that indicate the cats’ happiness; the computer vision system then reliably detects these behaviours, which in turn inform the assessment of whether a cat is happy or not.
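As a rough illustration of this division of labour (not the actual Cat Royale system; the behaviour names and valence weights below are invented for the example), the experts supply a valence-weighted behaviour list, and the vision system only reports which behaviours it detected:

```python
# Hypothetical illustration: expert knowledge is encoded as a mapping from
# observable behaviour to a welfare valence; the vision system's only job is
# to produce the list of detected behaviours.
EXPERT_BEHAVIOUR_VALENCE = {
    "slow_blink": 1.0,
    "play_with_toy": 1.0,
    "relaxed_posture": 0.5,
    "ears_flattened": -1.0,
    "tail_thrashing": -1.0,
}

def assess_happiness(detected_behaviours: list[str]) -> str:
    """Aggregate expert-assigned valences over detected behaviours."""
    score = sum(EXPERT_BEHAVIOUR_VALENCE.get(b, 0.0) for b in detected_behaviours)
    return "happy" if score > 0 else "not happy"

# e.g. assess_happiness(["slow_blink", "play_with_toy"]) -> "happy"
```

The point of the sketch is where the judgement lives: the behaviour list and its weights come from animal welfare experts, not from the vision system itself.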

In conclusion, the interview video was a great opportunity to share my work and knowledge with a wider audience and to spread awareness of the exciting possibilities of computer vision technology in the field of animal welfare. I was thrilled to be part of such a unique and creative project. The experience also gave me a new perspective on the importance of adapting my presentation to suit my audience and ensuring that my message is communicated effectively.

Publishing a Conference Paper – A Valuable Experience

post by Keerthy Kusumam (2017 cohort)

I published my conference paper, “Unsupervised face manipulation via hallucination”, at the International Conference on Pattern Recognition (ICPR). The paper presented a generative computer vision method for altering the pose and expression of a facial image in an unsupervised manner. I spent several months conducting experiments, analyzing the results, and discussing our findings. I received valuable feedback from my supervisors, which helped us improve the quality of our work.
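To sketch the general idea in code (a toy PyTorch illustration, not the paper’s actual architecture or training objectives; all dimensions and names are made up): an encoder maps a face image to an identity code, and a decoder regenerates the image conditioned on a target pose/expression vector.

```python
# Toy sketch of conditional face manipulation. Reconstructing an image under
# its *own* pose code gives a self-supervised training signal; swapping in a
# new pose code at test time performs the manipulation.
import torch
import torch.nn as nn

class ConditionalFaceGenerator(nn.Module):
    def __init__(self, image_dim=64 * 64 * 3, latent_dim=128, pose_dim=6):
        super().__init__()
        # Encoder: flattened image -> identity latent code.
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: identity code + target pose/expression -> new image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Tanh(),
        )

    def forward(self, image, target_pose):
        identity = self.encoder(image.flatten(1))
        return self.decoder(torch.cat([identity, target_pose], dim=1))

model = ConditionalFaceGenerator()
images = torch.rand(4, 3, 64, 64)        # a batch of face images
new_pose = torch.rand(4, 6)              # target pose/expression codes
manipulated = model(images, new_pose)    # shape: (4, 64*64*3)
```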

After the initial submission, I received comments from reviewers who suggested revisions. I took these comments into consideration and worked hard to make the necessary changes. The process was challenging but ultimately rewarding: the paper was accepted for an oral presentation at the conference. Our paper was well received and attracted several questions and comments from attendees, which was a valuable opportunity for me to network and gather feedback.

The motivation behind writing my conference paper was to explore the current state of face manipulation technology and to identify potential future directions for research in this area. As a second-year PhD student, I wanted to demonstrate my knowledge and understanding of the field, as well as contribute to the use of generative AI in face manipulation tasks. My main objective was to present a comprehensive overview of the current state of the field and to identify areas that could benefit from further research, especially behavioural monitoring in affective computing, where data is limited and generative AI can synthesize realistic data for further analysis.

I approached the research process by first conducting a thorough literature review to understand the current state of face manipulation technology and to identify gaps in the existing research. I then used a range of research methods, such as interviewing experts in the field and collecting data from sources including academic journals, conference proceedings, and online forums. I also conducted experiments to validate some of my findings.

My key findings showed that the field of face manipulation is rapidly advancing and that there are many promising areas for future research. I discovered that there are various technical and ethical challenges that must be addressed to ensure that face manipulation technology is used responsibly. These findings impacted my original objectives by reinforcing the need for further research in this area and by highlighting the importance of responsible development and use of face manipulation technology.

I presented my research in a clear and concise writing style, supported by visual aids such as diagrams, graphs, and tables to illustrate my points. I used a logical structure, with clear introductions, conclusions, and recommendations, so that my ideas were easily understood by the conference audience, and I made sure to state my findings clearly and to provide context for each point. The contributions were accompanied by experimental evidence.

One of the main challenges I faced while writing the conference paper was ensuring that my research was comprehensive and up-to-date. To overcome this, I made sure to regularly consult with my supervisors and to gather feedback from my peers. I also took the time to review relevant literature and to stay informed about the latest developments in the field.

As a result of writing the conference paper, my understanding of the topic of generative computer vision methods has deepened, and I have gained a better appreciation for the complex and rapidly evolving nature of this field. I have also gained a deeper understanding of the technical and ethical challenges that must be addressed to ensure responsible development and use of face manipulation technology.

The feedback I received from the conference audience was quite positive. Many attendees commented on the comprehensiveness of the research. Some attendees suggested areas for further research, which I have since incorporated into my future plans, especially in using this method to anonymize face datasets.

Overall, writing my conference paper on unsupervised face manipulation via hallucination was a valuable experience that allowed me to contribute to the field of generative computer vision. The research process deepened my understanding of the technical and ethical challenges that must be addressed to ensure the responsible development and use of face manipulation technology.

Paper reflection – Articulating Soma Experiences using Trajectories

post by Feng Zhou (2017 cohort)

Somaesthetics combines ‘soma’ with ‘aesthetics’. The concept of ‘soma’ is predicated on the interconnectedness of mind, body, emotion and social engagement, considering all to be inseparable aspects that together form an embodied, holistic subjectivity. Aesthetics here refers to the ways in which we perceive and interact with the world around us. Somaesthetics is a widely used methodology for user studies and plays a significant role in my PhD research. Researchers from the Royal Institute of Technology (KTH) Stockholm, who have focused on somaesthetics for many years and published a number of prominent papers, visited the Mixed Reality Lab (MRL) at the University of Nottingham (where I am based) and collaborated with researchers here to run workshops on somaesthetics. These workshops were an excellent chance for me to learn about somaesthetics in depth and to explore applying this methodology to my research.

Researchers were split into four groups to explore different applications. The group I joined explored the boundaries between humans and technology. The skin is traditionally seen as a critical boundary of the body and one way of defining the bodily self. We can see (that is, perceive with our eyes) our “external”, fleshy body: our moving limbs and parts, with the skin as its boundary. Our “internal” body (our organs, cells, muscles and so on) we cannot see, but instead feel or imagine. However, the boundary may be considered malleable. Take the example of a prosthesis: is it part of our body or a separate piece of technology?

We attached cloth straps to the dancer’s calf and thigh so that other members of the team could control them. Participants had to imagine a limb with a ‘mind of its own’: an exploration of dance in which part of one’s body was removed from one’s control. The dance experience became one of negotiating control with one’s own body. This could serve as a conceptual stand-in for novice kinaesthetic skill, where one’s body is unable to do what is asked of it, perhaps lacking the needed range of motion. But it went beyond simply being unable to perform: the controlled limb became a separate performer in its own right, creating an intriguing partnership with a part of one’s own body and encouraging the dancer to question the boundaries of their body and soma.

As we began to dance, our bodies behaved as we expected and we were unfettered. As our group began to take control of our limbs, we lost some agency over our bodies: the external influence restrained us or actively pulled us. We were no longer ‘at one’ with our own bodies; rather, those who controlled our limbs shared control with us. Over time, as we learned to work together, that external action could even come to feel like a part of us (at least as far as the experience goes). It should be noted that the group members pulling on the straps were a stand-in for a ‘disobedient’ prosthesis. A real prosthesis, beginning as part of us, separating from us and returning, might align even more tightly with our own body than the group experience did; nevertheless, the group did have access and licence to control our limbs.

This workshop was one of the user studies supporting our final paper. Questioning the boundaries between humans and technology also invites reflection on the boundary between ‘inside’ and ‘outside’: separated by the skin, breathing in and out, ingesting and excreting. Thinking through these boundaries allows designers to redefine them, and thus to challenge not only where the soma begins and ends, but also where the boundaries of experience lie. This later proved to be significant support for my own user workshop with disabled dancers on personalising their prostheses.

My job for the final paper was mainly to describe the activity I was involved in during the workshop. This was a precious experience of learning to write a paper collaboratively with many authors: our final paper has 14 authors from the Royal Institute of Technology Stockholm and the Mixed Reality Lab. Each of us wrote a specific part of the paper on Overleaf, and we held regular meetings to discuss writing-up issues. This was also when I started to learn LaTeX, which helped a lot in my later writing of papers and my thesis.

Measure and track your mood with smart clothes

post by Marie Dilworth (2017 cohort)

Have you ever thought about what it would be like to wear a t-shirt that measured your emotions and your mood?

One day this might be a reality!

We are running an online survey to understand what people think about emotion-tracking smart clothing.

We would love to know what you think about the idea.

If you can, please take 10-15 minutes to fill out this survey to support PhD research.

This research is being run by:

  • University of Nottingham, School of Computer Science and
  • Nottingham Biomedical Research Centre, Mental Health Technology

Survey:
https://nottingham.onlinesurveys.ac.uk/would-you-wear-mood-measuring-smart-clothes


Thank you for giving your time to support mental health technology research!

Marie Dilworth
PhD Candidate
School of Computer Science
University of Nottingham


https://www.nottingham.ac.uk/research/groups/mixedrealitylab/

https://nottinghambrc.nihr.ac.uk/research/mental-health