PhD researcher Christian Tamakloe (2016 cohort) is currently recruiting participants to take part in a study to help understand what preparation activities and behaviours result in better travel journeys.
As part of research into the use of personal data in improving the rail passenger experience, I am currently inviting individuals travelling on the train this month (December) to trial a proposed travel companion app aimed at helping rail travellers prepare for how they spend their time during journeys.
The app includes features such as travel information and reminders, as well as records of previous trip experiences.
Participants will be required to use the app for their upcoming trip, after which they will be asked to complete a short questionnaire to share their thoughts about the app.
The study is open to anyone over the age of 18 with some experience of rail travel in the UK. In addition, you will need to be travelling before 20 December 2020.
PhD researcher Matthew Yates (2018 cohort) is currently recruiting participants to take part in a short online study on detecting fake aerial images. Generative Adversarial Networks (GANs) have been used to create these images.
Hello. I am a third-year Horizon CDT PhD student partnered with Dstl. My PhD project is about detecting deep-learning-generated aerial images, with the final goal of improving current detection models.
I am looking for participants from all backgrounds, as well as those who have specific experience of dealing with either Earth observation data (satellite aerial images) or GAN-generated images.
Purpose: To assess the difficulty of distinguishing GAN-generated fake images from real satellite photos of rural and urban environments. This is part of a larger PhD project looking at the generation and detection of fake Earth observation data.
Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.
Commitment: The study should take between 5 and 15 minutes to complete and is hosted online on pavlovia.org.
How to participate? Read through this Information sheet and follow the link to the study at the end.
Feel free to contact me with any queries. Matthew.Yates1@nottingham.ac.uk
My first summer school started with an invite via email. Despite my interest in the topic, my first thought was that robotics was not my area of expertise (coming from a social science background), so maybe I shouldn’t bother applying as I’d be out of my depth. However, after some consideration, I decided it would create some great opportunities to meet new people from diverse backgrounds. So, I stopped worrying about my lack of knowledge in the area and just went for it; and I got a place!
The summer school was held digitally due to COVID-19 restrictions, which had both its benefits and pitfalls. On the first day, we were welcomed by Debra Fearnshaw and Professor Steve Benford, and were then given the opportunity to introduce ourselves. From this it was apparent that there was a wide variety of delegates from several universities, with a range of disciplines including social sciences, robotics, engineering and manufacturing. The first day mostly consisted of talks from experts about the challenges we face in connecting technology and the potential of co-robotics within the fields of agrirobotics, home and healthcare. The main task of the summer school was to create a cobot (collaborative robot) that could overcome some of the issues that COVID-19 has created or exacerbated. The issue that the group chose to address had to fall into one of the categories introduced on the first day: food production (agrirobotics), healthcare or home. Along with this challenge, more details were needed on function, technological components, and four key areas of the cobot design: ethics, communication, learning and safety. These aspects were introduced on the second day. After being split into groups at the end of the first day, I felt happy as my group had a range of experience and expertise between us, which I felt would bode well for the challenge as well as being beneficial for myself as I could learn something from everyone.
Similarly, the second day consisted mostly of talks, this time based on the four themes mentioned previously. The ethics discussion was interesting and included in-depth explanations around aspects to consider when reflecting upon the ethical consequences of our designs, such as privacy, law, security and personal ethics. An online activity followed the ethics talk but was soon interrupted by a technical glitch. Despite this, we were able to engage with alternative resources provided in order to reflect upon the ethics of our cobot design. This was useful both for our eventual design, as well as applying this to our own PhD research.
The other themes then followed, including a discussion around interaction and communication in technology. This was an insightful introduction to voice user interfaces and the like, and to what current research in this field is focusing on. While fascinating on its own, it was also useful for thinking about how to apply this to our cobot design, and which features might be useful or necessary for our cobot’s functionality. A talk on the third theme of learning was then delivered, including details about facial recognition and machine learning, and the applications of these in the field of robotics. Likewise, this was useful in reflecting upon how these features might be applicable in our design. Finally, the theme of safety was considered. This talk provided us with the knowledge and ability to consider the safety aspects of our cobot, which was particularly apt when considering COVID safety implications too. Overall, the first two days were quite lengthy in terms of screen time (despite some breaks), and I found myself wilting slightly towards the end. However, I think we could all understand and sympathise with the difficulty of minimising screen time when there is only a short space of time to complete all of the summer school activities.
On the final day, we split into our teams to create our cobot. This day was personally my favourite part of the summer school, as it was fantastic to work with such a variety of people who all brought different skills to the group. Together, we developed a cobot design and went through the themes from the previous day, ensuring we met the design brief and covered all bases. Probably the biggest challenge was keeping it simple, as we had so many ideas between us; we had to be strict with ourselves as a group to stay focused, and the five-minute presentation slot meant the design had to be simple yet effective. We then presented our home assistant cobot, Squishy. Squishy was an inflatable, soft cobot designed to assist carers in lifting patients who were bed-bound (as occupational injuries are a significant problem within the care industry). Squishy’s soft design enabled comfort for the patient being lifted, while the modular design provided a cost-effective solution and the possibility of added extras if necessary. Along with this, Squishy was beneficial in that it consisted of wipe-clean surfaces to enable effective cleaning in light of COVID-19, as well as aiding social distancing by reducing the need for carer-patient contact. Other features of Squishy included machine-learned skeletal tracking and thermal cameras to aid safe functionality, and minimal personal data collection to maintain ethical standards. After the presentations and the questions that followed, the judges deliberated. The results were in… my team were the winners! While I was happy to have won with my team, the most fruitful part of the experience for me was meeting and learning from others who had different backgrounds, perceptions and ideas.
Overall, I felt the summer school was well-organised and a fantastic opportunity to work with new people from diverse backgrounds, and I was very glad to be a part of it. I’m also pleased I overcame the ‘Imposter Syndrome’ feeling of not believing I would know enough or have enough experience to be a valuable delegate in the summer school. So, my advice to all students would be: don’t underestimate what you can contribute, don’t overthink it, and just go for it; you might end up winning!
Say hello to Squishy, initially inspired by Baymax (Hall & Williams, 2014). This COBOT concept was co-created during an intensive online Summer School in July 2020 run jointly by Connected Everything and the Smart Products Beacon at the University of Nottingham.
The online event, running over two and a half days, involved 28 delegates from various UK universities and culminated in a brief to design a COVID-ready COBOT (collaborative robot) to work in either Food Production, Healthcare, or the Home. Squishy was the collaborative brainchild of myself and the other five members of my group – the BOTtom Wipers… The group comprised me and Cecily from the 2019 cohort at Horizon CDT and Laurence, Hector, Siya and Robin from Lincoln, Strathclyde, and Edinburgh/Heriot-Watt universities, respectively.
The day and a half leading up to the design brief set the context through a series of related talks on the challenges of working in the different sectors as well as discussions on core aspects such as Ethics, Interaction and Comms, Learning and Safety. Hence by Friday morning, we were ready for our design challenge – to design a COBOT relevant to the COVID world we currently live in and present the concept in five slides lasting five minutes – and to achieve this by mid-afternoon the same day!
Our group quickly worked out how to make the most of our individual and varied backgrounds, ranging from robotics and machine learning to neuroscience and psychology. The challenge we decided on was situated in the home: lifting bed-bound residents, a task that places considerable physical strain on carers and requires close contact with individuals; obviously less than ideal in a COVID world.
Our solution was Squishy: a cost-effective assistive COBOT inspired by the fictional superhero Baymax (Hall & Williams, 2014) and by a caterpillar robot made using a 3D printer that could output soft, rubbery material and hard material simultaneously (Umedachi, Shimizu & Kawahara, 2019).
We decided on a soft, modular COBOT since we felt this would be more comforting and comfortable for the individuals being lifted. Manufacturing costs can limit access to assistive robots, so Squishy was inflated using pressurised air, with different air pockets allowing his shape to be modified to suit individuals of different body sizes and shapes. To ensure stability and safety as well as hygiene, we chose a two-body system comprising flexible 3D-printed silicone moulds overlaid with a wipe-clean textile. Being able to keep Squishy clean was critical given COVID.
Our next challenge was to ensure that Squishy could lift and put down an individual safely. We decided to use input from thermal cameras and real-time skeleton tracking using OpenPose, since this is a relatively straightforward and cost-effective system. We planned to teach Squishy to hold and lift safely via incremental learning of holding/lifting varied body shapes and weights, either from data sets or by imitation. The use of thermal cameras and skeleton tracking also allowed us to offer two additional modules if required. The first option was temperature screening (37.8 degrees Celsius or greater potentially indicating COVID infection) and the second was for Squishy to gently rock the individual to comfort them if required. A rocking motion has been shown to promote sleep in infants and, more recently, also in adults (Perrault et al., 2019).
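For the technically minded, here is a minimal, purely illustrative Python sketch of how the two safety ideas above might fit together: a temperature-screening gate (using the 37.8 °C threshold mentioned) and a basic confidence check on tracked skeleton keypoints before allowing a lift. The keypoint format and function names are my own assumptions for illustration; this does not use the actual OpenPose API and is not Squishy’s real control logic.

```python
# Illustrative sketch only: temperature screening plus a crude keypoint check
# before permitting a lift. Keypoints are assumed to arrive from an
# OpenPose-style tracker as (x, y, confidence) tuples.

FEVER_THRESHOLD_C = 37.8  # screening threshold mentioned in the concept

def screen_temperature(temp_c: float) -> bool:
    """Return True if the thermal-camera reading suggests a possible fever."""
    return temp_c >= FEVER_THRESHOLD_C

def keypoints_reliable(keypoints: dict, required=("neck", "mid_hip"), min_conf=0.5) -> bool:
    """Only allow a lift if key body landmarks are tracked with enough confidence."""
    return all(
        name in keypoints and keypoints[name][2] >= min_conf
        for name in required
    )

def safe_to_lift(temp_c: float, keypoints: dict) -> bool:
    # Block the lift if screening flags a fever or tracking is unreliable;
    # a real system would add many more checks (weight, pose stability, consent).
    return not screen_temperature(temp_c) and keypoints_reliable(keypoints)

# Example with made-up readings:
example_keypoints = {"neck": (0.51, 0.22, 0.93), "mid_hip": (0.50, 0.58, 0.88)}
print(safe_to_lift(36.9, example_keypoints))  # True
print(safe_to_lift(38.2, example_keypoints))  # False (flagged for screening)
```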
For ease of use and safety, we deliberately kept the input and output communications simple: a wearable control bracelet or necklace with buttons for basic functions (e.g. lift up/down), as well as an emergency stop button which would signal that assistance was required.
Ethical issues were key, both in terms of the collection and storage of personal data and the psychological aspects of Squishy interacting with humans. We decided to collect only the minimum personal data required for a safe and comfortable interaction, such as height, weight and BMI (which could be combined with skeleton tracking data), with the individual requiring assistance identified only by a unique identifier. Data would be stored in a secure storage system such as Databox, a platform for managing secure access to data developed through an EPSRC project involving collaborators from Queen Mary University of London, the University of Cambridge and the University of Nottingham. All our data processes would be GDPR compliant.
The individual’s response to and relationship with Squishy was also central to the design both in terms of the COBOT’s appearance, feel and touch and the use of slow, comfortable movements which engender relaxation and trust.
Having discussed and honed our design ideas, we then had to consolidate them into five slides and a five-minute presentation! We each worked on different aspects of the brief and then collectively refined the slides until we had the final version. Getting across the key elements in five minutes proved to be a challenge, with our first run-through coming in at closer to seven and a half minutes, but on the day we just managed to finish on time. It was interesting to see how many people really struggled with the time challenge, and I am sure my experience at the summer school will be useful for when I enter the Three Minute Thesis (3MT®) in 2021…
And the outcome of all this hard work and collaboration? I am delighted to report that The BOTtom Wipers and Squishy won the COBOT challenge.
References:
Hall, D., & Williams, C. (2014). Big Hero 6 [Film]. Walt Disney Animation Studios.
Perrault, A. A., Khani, A., Quairiaux, C., Kompotis, K., Franken, P., Muhlethaler, M., Schwartz, S., & Bayer, L. (2019). Whole-Night Continuous Rocking Entrains Spontaneous Neural Oscillations with Benefits for Sleep and Memory. Current Biology, 29(3), 402-411.e3. https://doi.org/10.1016/j.cub.2018.12.028
Umedachi, T., Shimizu, M., & Kawahara, Y. (2019). Caterpillar-Inspired Crawling Robot Using Both Compression and Bending Deformations. IEEE Robotics and Automation Letters, 4(2), 670-676. https://doi.org/10.1109/LRA.2019.2893438
As we all know, the current pandemic has put a stop to mass gatherings for the foreseeable future. This has meant cancelling academic conferences or trying to deliver them online. When I submitted to the British Academy of Management (BAM) conference in February 2020, I was looking forward to an exciting trip to Manchester involving lots of networking with fellow PhD researchers and more established academics. But, as it happens, the virus is still spreading amongst us, and instead of a nice stay in Manchester and sophisticated conversations over fancy conference meals, I was stuck in my study in front of a computer screen for four full days, detaching myself from my laptop only to grab a quick bite to eat.
The postponement of the live conference until 2021 was initially somewhat of a disappointment. When the time arrived, however (and perhaps motivated by the long six weeks of school holidays), I was in fact quite excited to usher my child back to school and start engaging with the online event. I was hoping that the symposium could offer me an opportunity to meet other early career researchers specialising in entrepreneurship and/or the cultural and creative industries, and, as my professional background is in event management, I was also keen to see how an event that traditionally takes place in a live setting can be adapted to an online environment.
For those of you who are not familiar with BAM (founded in 1986), it is the leading authority on the academic field of management in the UK, with a mission of supporting and representing scholars as well as engaging with the international business and management community. BAM has over 2,000 members, of whom 25% are located outside the UK and 30% are PhD students. It publishes two academic journals, the British Journal of Management and the International Journal of Management Reviews, and every year it organises the BAM conference, which usually gathers around 900 delegates from around the world.
This blog post is about the doctoral symposium, which kicked off the conference week with a one-day event and was followed by the three-day main conference. I have two aims with this post. First, I will highlight some key moments from the symposium, which I hope will be useful for other postgraduate researchers. Second, I will discuss the online conference experience from a participant’s as well as an event manager’s perspective.
Symposium highlights
The BAM Doctoral Symposium brought together over 160 participants from all over the world for a full day of Zoom sessions aimed at postgraduate researchers. In the welcome speech, the organisers stressed that, thanks to its online delivery, the event had attracted more international participants than in previous years. This was most likely because the online format was significantly more affordable, with a lower conference fee and no travel costs. The symposium was conducted entirely on Zoom, and the programme was split into sessions aimed at early-stage and later-stage PhD students, meaning that we could decide for ourselves which sessions best suited our needs.
The first highlight of the day was a session aimed at early-stage PhD students, entitled “Conducting a literature review” and led by Prof David Denyer and Dr Colin Pilbeam. Although I’m already starting the third year of my PhD, I have recently been revising my literature review (a never-ending task, I guess…), and therefore decided to join this session. The presenters began by stressing that it’s fundamental to get the literature review right, as otherwise it is very hard to justify the research question. Next, they moved on to discuss, among other things, the common traps PhD students fall into when writing a literature review. Out of the seven traps, I found that “the broad and unfocused review question” trap resonated strongly with me. I couldn’t help thinking back to the numerous hours I’ve spent reformulating my review questions in the past year. The presenters also shared a useful example of a review question that lacks focus and needs narrowing down.
What was particularly good about this session, however, was the presenters’ use of Mentimeter, which enabled them to conduct live polls during the session. Mentimeter also has a free plan that is suitable for students, for example, and even some of the premium plans are quite affordable. I’m definitely planning to try it out with my next online presentation.
The second highlight of the symposium relates to perhaps the most dreaded question amongst PhD students. I guess each of us is faced, at least once during the PhD journey, with the question “What is your contribution to theory?” In fact, this happened to me in my very first annual review, and, hoping to provide a better answer next time, I signed up for Professor Ashley Braganza’s session entitled “Exploring Theory as a Lens for Research”.
Professor Braganza began his presentation by underlining the relevance of the unit of analysis to doctoral studies. To be honest, I had never asked myself what the unit of analysis in my research was, but I do agree that the question seems highly pertinent and can guide you in the right direction when selecting your theoretical framework.
In addition, he argued that when selecting theories you should not use more than three, and should ideally stick to one or two, with the rest treated as research context. This was somewhat comforting to me, as it is something I have struggled with over the summer while revising my literature review and trying to locate my conceptual home within a broad array of multidisciplinary literature. Again, I think that having too many theories to choose from is very likely to be a common issue for interdisciplinary researchers.
The final highlight of my symposium day was a paper presentation session, where selected PhD students presented their research and were offered a chance to receive feedback from the audience as well as from a senior academic. I decided to watch two presentations in the entrepreneurship track, focusing on female and family entrepreneurship. It was very inspiring to hear about fellow PhD students’ work and compare their journeys to my own. I didn’t submit a paper to the doctoral symposium, as our developmental paper had been accepted for the main conference, and I felt that being in the middle of my second phase of data collection probably wasn’t the best time to present findings from my research. I did end up regretting my decision though, and blaming my perfectionism for it, as, in fact, it would have been totally fine to present some initial findings from my research. Nevertheless, it was reassuring to realise that others’ papers were nowhere near being finished or polished pieces of writing. I guess the main advice I can give to other PhD students is that you shouldn’t hesitate to submit to doctoral symposiums, even if your work is still far from perfect. Luckily, I will still have a chance to do so next year.
The online experience
From a technological perspective, everything worked smoothly, and Zoom proved to be an adequate platform for hosting over 160 people for a one-day event. The larger sessions were conducted in Zoom’s webinar mode, meaning that cameras were only on for the speakers, while everyone else was muted and could ask questions in the Q&A section. During the smaller sessions, we were able to have our cameras on and join the conversation if we wished to do so. However, Zoom is not a conference platform as such. This meant that every session had a different URL, and you had to constantly navigate between the PDF programme, where the video call links were, and the actual Zoom app. As I was also taking notes, and occasionally tweeting, the whole experience was quite laborious, involving continuous switching from one browser page to another depending on what I wanted to do. I do hope this pandemic and the increased demand for all-in-one virtual conference platforms will motivate companies to develop more user-friendly platforms that can handle large numbers of participants, as I haven’t yet been lucky enough to come across one.
To sum up, a virtual conference simply cannot compete with the real thing – at least not yet. However, until technology develops enough to provide a smoother experience, it remains the best option we have for maintaining at least some aspects of normality. For me, the main difference between the online and live conference formats was undoubtedly the lack of ad hoc networking opportunities: those random conversations that in real life might take place during a coffee break or over a conference meal. For instance, we were able to ask questions during presentations in the Zoom Q&A section, but there were no real opportunities to engage in conversation with fellow PhD candidates.
It is evident that spontaneous digital engagement between people who have never met each other is extremely difficult to achieve in a virtual setting, as it doesn’t occur naturally or without encouragement. However, I do realise that this is only the beginning of what might be an era of virtual events, and the technology for conducting this type of event is developing at a rapid pace. I also think it’s important that event organisers take the risk of experimenting with different software and try out new ways of structuring their events. For example, and perhaps because of time restrictions, this event didn’t have any time allocated for networking, meaning that 160 participants didn’t have a real opportunity to meet each other. This was not the case during the actual BAM conference, where plenty of time was allocated to networking in custom-made virtual coffee rooms, but as it was not compulsory or moderated, only a handful of delegates made the effort to engage with others.
My main takeaway from the virtual doctoral symposium is that it whetted my appetite for the BAM live conference and introduced me to a like-minded, extremely inclusive research community. As my research sits somewhere between arts and business, I remember that last year, after participating in the Audience Research in the Arts conference, I ended up feeling a bit like an interdisciplinary alien. That’s why I’m happy to say that with the BAM conference this year I didn’t feel like that at all; on the contrary, I felt like the business and management community was (virtually) embracing me quite warmly. My fingers are tightly crossed in the hope that in 2021 I can take part in the doctoral symposium and BAM conference in person, and I promise to be very grateful for every little opportunity for small talk or any form of live conversation.
Publishing a Paper about In-Person Interactions at a Virtually Held Conference (CHI PLAY)
The National Videogame Arcade (now the National Videogame Museum) is a games festival turned cultural centre that celebrates games and the people who make, play and interact with them. While more traditional museums might not allow visitors to interact directly with their exhibits, the NVM encourages this direct interaction in an open, genuine way: “Games are for everybody” is one of the core values of the museum. Outside of being an educational and creative hub for everything games-related, it is also my PhD’s industry partner.
Before the NVA moved to Sheffield in September 2018 to become the NVM, I was lucky to join my research partner for most of their last month in Nottingham. Since my research is centred around exploring ideas of (self-)care through a justice-, collective- and games-informed lens (to paint a picture with broad strokes), I was very keen on figuring out how people in the NVA made sense of it and how they created meaning in their interactions with the space and with others. To do so, I joined visitors during their “journey” through the NVA: I watched people play games, enjoy themselves (or get frustrated!) and share and make memories.
After carefully analysing the data (and a couple of busy months with other studies), I asked two of my fellow doctoral researchers if they would be interested in exploring the data once again and writing a paper together: Gisela Reyes-Cruz (my go-to person for everything interaction and ethnomethodology-related!) and Harriet “Alfie” Cameron (who has a keen eye for everything museum- and power-structures-related).
Together, we wrote and submitted “Plastic Buttons, Complex People: An Ethnomethodology informed Ethnography of a Video Game Museum” to CHI PLAY 2020, where it got accepted!
The paper explores how groups of visitors interact with one another, establishing practices in and around the playable exhibits as well as between themselves. It finishes with some ideas and design implications that spaces aiming to engage people in co-located play can take up to support or disrupt group interactions.
Here is our abstract as a teaser:
“This paper reports on an ethnomethodology-informed ethnography of a video game museum. Based on 4 weeks of ethnographic fieldwork, we showcase how groups of visitors to the museum achieved the interactional work necessary to play games together and organise a museum visit as a social unit. By showcasing and explicating excerpts of museum visits we make the taken-for-granted nature of these interactions visible. Embedded within an activity map that outlines how people prepare, play, wind down and exit games, we showcase the sequential, temporal, and carefully negotiated character of these visits. Based on our findings and the resulting Machinery of Interaction, we propose three design implications for spaces that aim to exhibit video games and/or try to facilitate co-located, collective video gameplay.”
It is a weird feeling to present our first paper in a virtual format, but it is a tremendously joyful occasion (and a very enriching learning experience)!
This paper would not have been possible without the support of my supervision team Martin Flintham, Pat Brundell and David Murphy; the lovely and wonderful folks at the NVA/NVM, all of the visitors who took part in research and Stuart Reeves for valuable comments on one of the earlier paper drafts!
Hello everyone. I hope you are doing well and are well-supported during these trying times. This period has allowed me to reflect on two public engagement activities that I took part in last year. The activities involved two exhibitions where I had the opportunity to engage with the public presenting three Mixed Reality Lab (MRL) creations in two venues (MRL at CHI, 2019 and Halfway to the Future, 2019). The MRL creations that I presented were:
Touchomatic
A touch-based two-player cooperative video game, in which each player holds a sensor stick with one hand and touches the hand of the other participant to control how low and high an airship can fly. Participants have to find the sweet spot to fly the airship to collect coins and not run out of gas.
Get Screwed
A virtual reality (VR) experience that plays with notions of control, sensory misalignment, and vertigo. Participants are placed on top of a swivel chair which is rendered as a bolt by the VR interface and attempt to unscrew it by turning.
VR Playground
A VR exploration of sensory misalignment, in which people use the motion of a swing to control their navigation through the virtual world. The VR interface couples the motion with movements in the game.
In the two venues, people from different backgrounds came to try the above experiences: researchers from the field of human-computer interaction (HCI), employees of technology companies, people who don’t work in technology-related fields, and people who accompanied attendees of the conference, from adults to children, all participated in the experiences.
People were interested not only in having the experience, but also in knowing more about the technological, creative, psychological, and other components of the creations. People wanted to know what technology was used and what happened in our minds that made it possible for the creation to integrate the virtual and physical. Given that people had different levels of expertise in the area, it was necessary to give explanations according to people’s needs. In a nutshell, the response of a professor will differ from that of a child.
Overall, I found the exhibitions a challenging task. There were difficult moments when people were not satisfied with my answers, and I had to find other ways of explaining the mechanics of these VR experiences. I found it very useful to relate my explanations to common knowledge through metaphors and analogies that people could easily connect with.
It was a very enriching experience given that I had the opportunity to talk about interactive experiences, VR, and motion sensing with people from different backgrounds. Those topics are very different from the area of research on which my PhD is focused, which is on design ethnography and smart products.
In general, the experience was very fulfilling. I had the opportunity to share my time exchanging ideas with people and engage in thought-provoking conversations. It has taught me valuable skills related to speaking about and explaining my research with a diverse audience. I consider some positive outcomes of my participation in these public engagement activities to be:
The opportunity to practice critical thinking and think about the relevance of our research for the real world
Improvement of communication skills, as we are accustomed to talking to an academic audience, and this kind of experience gives us the opportunity to practise with non-academic audiences
Building confidence as we have to be prepared to answer all kinds of questions on the spot and without prior preparation
Personal satisfaction from sharing with society part of the research that we conduct in the confined spaces of the lab
I hope that shortly we are able to take part in such open and diverse environments again.
At the end of 2019, as a Horizon CDT student at the University of Nottingham, I attended a workshop called Computer Vision for Physiological Measurement in Seoul, South Korea. The workshop focused mainly on applying recent advances in computer vision to measuring human physiological state.
This year, more than 50 people from companies and academic institutions attended the workshop, and 19 of us gave talks to share our research and discuss potential future research directions.
During the workshop, I gave a talk about applying state-of-the-art machine learning techniques to automatically detect emotions from people’s faces. In particular, the approach uses people’s facial muscle movements to infer their emotional state. The technique can also be applied to other purposes, as facial dynamics can reflect many different aspects of a person’s state.
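To give a flavour of the underlying idea, below is a small, illustrative Python sketch that maps facial muscle movements, represented as FACS action units (AUs), to a handful of basic emotions using simple rules. This is a toy stand-in rather than the learned models discussed in the talk: the AU scores are assumed to come from an upstream detector that is not implemented here, and the rule set is only indicative.

```python
# Illustrative sketch: mapping facial action unit (AU) activations to a few
# basic emotions with simple rules. Real systems learn these mappings from data.

EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def infer_emotion(au_scores: dict, threshold: float = 0.5) -> str:
    """Return the emotion whose required AUs are all active, else 'neutral'."""
    active = {au for au, score in au_scores.items() if score >= threshold}
    best, best_size = "neutral", 0
    for emotion, required in EMOTION_RULES.items():
        # Prefer the most specific rule whose AUs are all present.
        if required <= active and len(required) > best_size:
            best, best_size = emotion, len(required)
    return best

# Example: strongly activated AU6 and AU12 suggest a smile.
print(infer_emotion({6: 0.8, 12: 0.9, 4: 0.1}))  # happiness
```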
More importantly, we also discussed how such techniques could be applied to benefit our daily lives. For example, they could be extended to make a quick and objective judgement about someone’s mental health, such as depression, or to predict someone’s personality. Being able to understand human personality quickly and automatically is important in recruitment, for instance: it can help employers to better recognise which candidates are more suitable for a job or more willing to work in a group. Mental healthcare is another potential application. While it is expensive and time-consuming to have mental health experts carry out diagnoses, such a technique could provide a cheap, quick and objective assessment for most patients, as well as additional useful information for the clinicians involved.
In short, such techniques have great potential to improve business and our quality of life. For investors, this could be a promising direction in which to invest money and time.
Since the workshop was co-located with the ICCV conference, at the end of the event I had a great time at the banquet and enjoyed some nice chats with other attendees.
Hiiii everyone, it’s Harriet here. Hope you’re all doing well and finding ways to support yourselves and the folks around you. I’m going to share a few words about my experience of writing and presenting a paper at the Designing Interactive Systems (DIS) conference in this, the year of our undoing, 2020. Now I know you are all sick of reading about this so I’m going to get it out of the way early and then only reference it in thinly veiled metaphors where I absolutely have to. Obviously, when we first had our paper accepted into DIS, we weren’t expecting a pandemic to barrel in and seal off the opportunity to hop over to Eindhoven for the week, present our paper in person, and have a good ol’ chin wag with other researchers about our findings. So this blog post might be a little different given that chunks of it will be dedicated to navigating a virtual conference and the changes that have resulted from that.
So first of all, I’ll start you off with an introduction to the paper. I was a co-author on the paper with a number of other amazing and talented folks – Dr Jocelyn Spence (lead), Dr Dimitri Darzentas, Dr Yitong Huang, Eleanor Beestin, and Prof Steve Benford. It was about a project called VRtefacts [4], something I have written about previously[1]. The TL;DR for VRtefacts is that it was a fantastic project which came about as an offshoot of the GIFT project [2] – a series of international projects funded through Horizon 2020 that look at ways of using gifting to enhance cultural heritage experiences. VRtefacts used a combination of physical props and virtual reality to encourage visitors to a museum to donate personal stories inspired by a small selection of artefacts on display. Our paper explores how the manipulations and transitions embedded in VRtefacts can enable personal interpretation and enhance engagement through performative substitutional reality, as demonstrated through storytelling.
I first joined the squad because my background in human geography offers up a different approach to HCI analysis that can draw out themes of place, space, and identity in novel ways. For this research, we conducted thematic analysis on post-experience interviews and videos of participant stories captured in the deployment. I primarily focused on conducting a section of the analysis to examine how space and place were represented and understood throughout participants’ experiences. Through the different passes conducted for the thematic analysis [1], these loose concepts of space and place evolved into how physical distance and scale affected the experience, and how the transitions between different spaces and places – both physically and emotionally – impacted on the storytelling. At the same time as I was working on this, Jocelyn and Yitong were conducting their thematic analyses on the data to explore other concepts that came up like contextualisation of stories, attitudes towards the objects and the museum, and the influence of touch and visuals.
Working together like this was a really interesting experience. I’m familiar with NVivo [3] – the software widely used for this type of qualitative coding – having used it a few times before in my work. However, finding ways to navigate NVivo as a team – exploring how to compare notes, cross-reference emerging codes, and merge/condense/combine the codes that overlapped – offered a whole new challenge. The version of NVivo we had access to did not allow multi-party editing of one database and we were using different operating systems (which each have their own incompatible versions of the software), so we had to get slightly creative in just how we did team working. After some trial and error, we decided to each work on our own dataset and periodically combine them into one master document. Sometimes this meant having to compare the documents and painstakingly comb through them for wayward spaces and capitalisations just so that we could merge our files – a great joy to be sure. But we also regularly got together and went through our codes side-by-side with the other members of the team, deciding on how best to combine our efforts. By doing so, we essentially added a new kind of ‘pass’ per pass that, sure, created extra work, but genuinely helped us to better understand and be able to justify not only our own codes but each other’s as well.
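As an aside, a few lines of scripting can take much of the pain out of that clean-up step. Below is a hypothetical Python sketch that normalises stray spaces and inconsistent capitalisation in code labels before merging codebooks from different coders; the two-column CSV layout is invented for illustration and is not NVivo’s actual export format.

```python
# Hypothetical sketch: normalise code labels and merge codebooks exported
# (in an assumed "code,references" CSV layout) by different team members.
import csv
from collections import Counter

def load_codebook(path: str) -> Counter:
    """Read a two-column CSV of code label -> reference count, normalising labels."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Collapse wayward spaces and unify capitalisation before merging.
            label = " ".join(row["code"].split()).lower()
            counts[label] += int(row["references"])
    return counts

def merge_codebooks(paths):
    """Combine several coders' codebooks into one master count per code."""
    master = Counter()
    for path in paths:
        master.update(load_codebook(path))
    return master

# Example (file names are made up): merge_codebooks(["harriet_codes.csv", "jocelyn_codes.csv"])
```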
This was an approach that we also extended somewhat to the paper writing itself. We each branched off and wrote our own specialised sections, and then came back together to work on the overall flow and content. Across several iterations of the paper, we worked out what the core findings were and how best to present them, ultimately landing on performative substitutional reality as understood through manipulations (of physicality, visuals, and scale) and transitions (between spaces and through storytelling). On a personal note, it was really validating and exciting to see my contribution come to life and become such an integral part of the paper. It was also a brilliant first foray into paper writing – to have such a supportive and generous team to work with took large amounts of the panic away from ‘am I doing this right?’ and ‘how does all of this even smoosh together?!’ If you get the chance to work with others for your first paper-writing experience, I super duper recommend it. Especially for when it gets to the final details: formatting, submission, keywords etc etc etc, where I wouldn’t even have known where to begin without the (very) patient guidance of Jocelyn and Dimitri. For a whole host of reasons beyond the control of anyone, the paper came to its final form just a couple of hours before the submission deadline, with three of us sat on overleaf culling, and prodding, and spellchecking on the night of Brexit. The fireworks erupting in the distance just as we agreed it was done added a special kind of bathetic farcical atmosphere to the completion of my first paper.
The paper was accepted with only minor adjustments and we were off to Eindhoven. Except not really, because of “the event”. Instead, we were asked to put together a 10-minute video presentation which would be broadcast as part of the newly styled virtual DIS 2020. We divided the presentation up into chunks and Jocelyn, Dimitri and I each took a few slides and narrated over them. Recording over presentations is a skill I haven’t had much reason to use since GCSE ICT, but it is increasingly becoming an essential one, and one I am rapidly reacquainting myself with. You know. Because of “the issue”. Unfortunately, when the time for the conference itself came about, DIS wasn’t particularly interactive and presentations and papers were simply left online for people to interact with as they came across them. I did engage with the hashtag on Twitter regularly and found some new academics to follow, but aside from that, there isn’t much to say on the reception of the paper. I did, to the bemusement of my housemates, however, go rather overboard in the kitchen to make the most of the situation, e.g. breaking into the last of my waffles, lovingly made according to the recipe of my fabulous friend’s Oma, to make the Dutch experience come to me. The ultimate power move.
Being involved in the GIFT project in the ways I have, but particularly from being part of VRtefacts, has completely changed certain paradigms through which I approach my PhD. Not only has it provided a grounded example of how integral donation can be as a framing device to bridge the gap between audiences and galleries, but it also offered me an amazing chance to practise multi-disciplinary writing which spanned both of my subject areas (HCI and human geography). I’ve already had opportunities to be involved in other parts of the GIFT project and we have also submitted an article to the HCI Journal special issue on time, exploring how manipulations of time and time-space contributed to the experience of VRtefacts. I’m looking forward to seeing what other opportunities come my way from being part of these papers and practising my shiny new paper-writing skills in the future.
[1] V. Braun and V. Clarke, “Using thematic analysis in psychology,” Qualitative Research in Psychology, vol. 3, no. 2, pp. 77-101, 2006.
[2] GIFT Project. (2019). GIFT Project. Available: https://gifting.digital/ (Accessed: 8/5/2019)