I am excited to be working with Derek McAuley, James Pinchin and Dominic Price from Horizon on a Social Distancing (SoDis) research project. We aim to understand how individuals act when given information indicating concentrations of people, and thus busyness of places.
We are employing a privacy-preserving approach to the project data, which are collected from mobile devices' WiFi probe signals. With the permission of building managers and the relevant Heads of Schools, the SoDis Counting Study will deploy WISEBoxes in a limited number of designated University buildings, gather the relevant data from the Cisco DNA Spaces platform, which the University has implemented across its Wi-Fi network, and undertake a gold-standard human count.
Essentially, WISEBoxes are a sensor platform developed as part of a previous Horizon project, WISEParks. These sensors count the number of Wi-Fi probe requests seen in a time-period (typically 5 minutes) from unique devices (as determined by MAC address). MAC addresses, which could be considered personally identifiable information, are only stored in memory on the WISEBox for the duration of the count (i.e. 5 minutes). The counts, along with some other metadata (signal intensities, timestamp, the WiFi frequency being monitored) are transmitted to a central server hosted on a University of Nottingham virtual machine. No personally identifiable information is permanently stored or recoverable.
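The counting logic described above can be sketched roughly as follows. This is a simplified illustration with hypothetical names, not the actual WISEBox firmware: probe requests are grouped into 5-minute windows, distinct MAC addresses are counted per window, and the addresses are discarded as soon as each count is taken.

```python
from collections import defaultdict

WINDOW_SECONDS = 300  # the WISEBox's 5-minute counting interval

def windowed_unique_counts(probes):
    """probes: iterable of (timestamp_seconds, mac_address) pairs.

    Groups probe requests into 5-minute windows and counts distinct
    devices per window. The MAC sets exist only transiently and are
    dropped once each count is taken, mirroring the privacy model."""
    windows = defaultdict(set)
    for ts, mac in probes:
        window_start = ts - (ts % WINDOW_SECONDS)
        windows[window_start].add(mac)
    # Reduce to bare counts, discarding the identifiable MAC data.
    return {start: len(macs) for start, macs in windows.items()}
```

In the real deployment only the counts and metadata leave the device; the point of the sketch is that nothing identifiable survives the reduction step.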
We will have ‘safe access’ to Cisco DNA Spaces API, meaning MAC addresses and other identifiers will not be provided to the SoDis research team. The data we gather from Cisco DNA Spaces API will be processed to produce information similar to that gathered by the WISEBoxes, i.e. counts of number of unique users connected to an access point in a period of time.
To develop our ‘busyness’ models, we will also deploy human researchers to count people in designated buildings and spaces. This human-counting element will provide a gold standard for said buildings, at the time of counting. This gold standard can then be modelled against data simultaneously produced from WiFi signal counting methods, producing an estimated level of busyness.
With the help of several research assistants, we will collect 40 hours of human-counting data, illustrating building activity over a typical workweek. We expect to start this human-counting work in the School of Computer Science Building mid-January 2021.
This gold standard human-count will include both a door count and an internal building count. For each designated building, we will have researchers posted at the entrances and exits to undertake door counts. The door counters will tally numbers of people going in and numbers going out within 5-minute intervals using + and – signs. On each floor, researchers will count people occupying rooms and other spaces in the building (e.g., offices, labs, atrium, corridors). Each space will be labelled by room number or name on a tally sheet. Researchers will do two rounds of their assigned floor per hour, checking numbers of people occupying the various spaces. Different buildings will require different arrangements of researchers to enable an accurate count. For example, to cover a school building like Computer Science on Jubilee, we will have 6 researchers counting at any one time.
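The door tallies translate into occupancy in the obvious way: cumulative entries minus cumulative exits, interval by interval. A minimal sketch (hypothetical function name, not actual study code):

```python
def occupancy_from_tallies(tallies, start=0):
    """tallies: list of (entries, exits) per 5-minute interval,
    as recorded by the door counters.

    Returns the estimated number of people inside the building at
    the end of each interval."""
    occupancy, current = [], start
    for entries, exits in tallies:
        current += entries - exits
        occupancy.append(current)
    return occupancy
```

For example, starting from 10 occupants, tallies of (5 in, 1 out) then (3 in, 4 out) give running occupancies of 14 and 13.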
We expect some of the data collected from the WiFi probes and connections to be spurious (noise); however, this is not a concern. Why? Because to represent busyness, we do not need exact numbers.
It is accepted that the data may not be accurate; for example, a device belonging to someone who is not actually in the building may send a WiFi probe signal to an access point (AP) or WISEBox in the designated building. This potential for inaccuracy is a recognised feature of the privacy-preserving approach we are taking to model busyness for the social distancing tool, SoDis. The researchers undertaking the human-counting study may miss the occasional person roaming the building, but this level of error is not of particular concern. When the human count is triangulated with the sources of WiFi data, a model of busyness for that space will be produced.
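As a rough illustration of how that triangulation might work (an assumed, minimal model for the sake of example, not the project's actual method), the gold-standard human counts can be regressed against the simultaneous WiFi-derived counts, and the fitted line then used to turn future WiFi counts into busyness estimates:

```python
def fit_busyness_model(wifi_counts, human_counts):
    """Ordinary least squares fit of people ~ a * wifi_count + b,
    pairing each WiFi-derived count with the simultaneous human count."""
    n = len(wifi_counts)
    mean_w = sum(wifi_counts) / n
    mean_h = sum(human_counts) / n
    cov = sum((w - mean_w) * (h - mean_h)
              for w, h in zip(wifi_counts, human_counts))
    var = sum((w - mean_w) ** 2 for w in wifi_counts)
    a = cov / var
    b = mean_h - a * mean_w
    return a, b

def estimate_busyness(wifi_count, a, b):
    """Turn a raw WiFi device count into an estimated head count."""
    return max(0.0, a * wifi_count + b)
```

Under a model like this, systematic over-counting (passers-by, multi-device users) is absorbed into the fitted slope and intercept, which is why exact raw numbers are not needed.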
The approach we are testing is relevant not only to our current desire to reduce infection from COVID-19 but may also prove useful to support other health and social causes.
Laurence Cliffe (2017 cohort) writes about how the design of analogue music equipment influenced the online interactive experiments in the Science and Media Museum's Sonic Futures project.
The practice of attempting to replicate historical analogue music equipment within the digital domain remains a popular and enduring trend. Notable examples include electric guitar tone studios, digital amplifier simulators, virtual synthesisers, and effects plugins for digital audio workstations such as Logic, GarageBand or Cubase.
As expected, such examples attempt to replicate the much-loved nuances of their analogue counterparts, whether that be the warmth of vintage valve amplification and magnetic tape saturation, or the unpredictable, imperfect and organic characteristics of hand-assembled and aged electronic circuitry.
Within the Sonic Futures online interactive exhibits we can hear the sonic artefacts of these hardware-related characteristics presented to us within the digital domain of the web: the sudden crackle and hiss of a sound postcard beginning to play, and the array of fantastic sounds that can be achieved with Nina Richards' Echo Machine. After all, who would be without that multi-frequency, whooping gurgle you can create by rapidly adjusting the echo time during playback?
Within digital music technology, while this appetite for sonic nostalgia is interesting in itself, we can also see how this desire to digitally replicate an ‘authentic experience’ extends to the way in which these devices are visually represented, and how the musician, music producer or listener is directed to interact with them. Again, we see this in the Sonic Futures online interactive exhibits: the Sound Postcard Player with a visual design reminiscent of a 1950s portable record player; the Echo Machine’s visual appearance drawing upon the design of Roland’s seminal tape-based echo machine of the 1970s, the RE-201 or Space Echo; and Photophonic with its use of retro sci-fi inspired fonts and illustrations.
We can see even more acute examples of this within some of the other examples given earlier, such as virtual synthesisers and virtual guitar amplifiers, where features such as backlit VU meters (a staple of vintage recording studio equipment) along with patches of rust, paint chips, glowing valves, rotatable knobs, flickable switches and pushable buttons are often included and presented to us in a historically convincing way as an interface through which these devices are to be used.
This type of user-interface design is often referred to as skeuomorphism, and is prevalent within many digital environments; the trash icon on your computer's desktop is a good example (and is often accompanied by the sound of a piece of crunched-up paper hitting the side of a metallic bin). Skeuomorphism as a design style tends to go in and out of fashion. You may notice the look and feel of your smartphone's calculator changing through the course of various software updates, from one that is to a lesser or greater degree skeuomorphic to one that is more minimalist and graphical, often referred to as being of a flat design.
Of course, it is only fitting that the Sonic Futures virtual online exhibits seek to sonically and visually reflect the historical music technologies and periods with which they are so closely associated. At a point in time when we are all seeking to create authentic or realistic experiences within the digital domain, whether it be a virtual work meeting or a virtual social meetup with friends and relatives, using the visual and sonic cues of our physical realities within the digital domain reassures us and gives our experience a sense of authenticity.
Along with the perceived sonic authority of original hardware, another notable reason why skeuomorphic design has been so persistent within digital music technology is the interface-heavy environment of the more traditional hardware-based music studio (think of the classic image of the music producer sitting behind a large mixing console with a vast array of faders, buttons, switches, dials and level meters). When moving from this physical studio environment to a digital one, it made sense, in order to facilitate learning, to make the new digital environment a familiar one.
Another possible contributing factor is the relative ease with which the digital versions can be put to use within modern music recording and producing environments: costing a fraction of the hardware versions and taking up no physical space, they can be pressed into action within bedroom studios across the globe. Perhaps this increased level of accessibility generates a self-perpetuating reverence for the original piece of hardware, which is inevitably expensive and hard to obtain, and therefore its visual representation within a digital environment serves as a desirable feature, an authenticating nod to an analogue ancestor.
There are, of course, exceptions to the rule. The digital audio workstation Ableton Live (along with some other DAWs and plugins) almost fully embraces a flat design aesthetic. This raises the question: what role, if any, does the realistic visual rendering of a piece of audio hardware play in its digital counterpart? What does it offer beyond the quality of the audio reproduction? From the perspective of a digital native (someone who has grown up in the digital age), its function as a way to communicate authenticity is thrown further into question, and perhaps it is skeuomorphic design's potential to communicate the history behind the technology that comes into focus.
PhD researcher Christian Tamakloe (2016 cohort) is currently recruiting participants to take part in a study to help understand what preparation activities and behaviours result in better travel journeys.
As part of research into the use of personal data in improving the rail passenger experience, I am currently inviting individuals travelling on the train this month (December) to trial a proposed travel companion app aimed at helping rail travellers prepare for how they spend their time during journeys.
The app includes features such as travel information and reminders, as well as records of previous trip experiences.
Participants will be required to use the app for their upcoming trip, after which they will have to complete a short questionnaire to share their thoughts about the app.
The study is open to anyone above the age of 18 years with some experience of rail travel in the UK. In addition, you will need to be travelling before the 20th of December, 2020.
PhD researcher Matthew Yates (2018 cohort) is currently recruiting participants to take part in a short online study on detecting fake aerial images. Generative Adversarial Networks (GANs) have been used to create these images.
Hello! I am a 3rd-year Horizon CDT PhD student partnered with Dstl. My PhD project is about the detection of deep-learning-generated aerial images, with the final goal of improving current detection models.
I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (Satellite aerial images) or GAN-generated images.
Purpose: To assess the difficulty of distinguishing GAN-generated fake images from real satellite photos of rural and urban environments. This is part of a larger PhD project looking at the generation and detection of fake Earth observation data.
Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.
Commitment: The study should take between 5 and 15 minutes to complete and is hosted online on pavlovia.org.
How to participate? Read through this Information sheet and follow the link to the study at the end.
Feel free to contact me with any queries. Matthew.Yates1@nottingham.ac.uk
My first summer school started with an invitation via email. Despite my interest in the topic, my first thought was that robotics was not my area of expertise (coming from a social science background), so maybe I shouldn't bother applying as I'd be out of my depth. After some consideration, though, I realised it would create some great opportunities to meet new people from diverse backgrounds. So I stopped worrying about my lack of knowledge in the area and just went for it; and I got a place!
The summer school was held digitally due to COVID-19 restrictions, which had both its benefits and pitfalls. On the first day, we were welcomed by Debra Fearnshaw and Professor Steve Benford, and were then given the opportunity to introduce ourselves. From this it was apparent that there was a wide variety of delegates from several universities, with a range of disciplines including social sciences, robotics, engineering and manufacturing. The first day mostly consisted of talks from experts about the challenges we face in connecting technology and the potential of co-robotics within the fields of agrirobotics, home and healthcare.

The main task of the summer school was to create a cobot (collaborative robot) that could overcome some of the issues that COVID-19 has created or exacerbated. The issue each group chose to address had to fall into one of the categories introduced on the first day: food production (agrirobotics), healthcare or home. Along with this challenge, more details were needed on function, technological components, and four key areas of the cobot design: ethics, communication, learning and safety. These aspects were introduced on the second day. When we were split into groups at the end of the first day, I was happy that my group had a range of experience and expertise between us, which I felt would bode well for the challenge as well as being beneficial for me, as I could learn something from everyone.
Similarly, the second day consisted mostly of talks, this time based on the four themes mentioned previously. The ethics discussion was interesting and included in-depth explanations around aspects to consider when reflecting upon the ethical consequences of our designs, such as privacy, law, security and personal ethics. An online activity followed the ethics talk but was soon interrupted by a technical glitch. Despite this, we were able to engage with alternative resources provided in order to reflect upon the ethics of our cobot design. This was useful both for our eventual design, as well as applying this to our own PhD research.
The other themes then followed, including a discussion around interaction and communication in technology. This was an insightful introduction to voice user interfaces and the like, and to what current research in this field is focusing on. While fascinating in its own right, it was also useful for thinking about how to apply this to our cobot design, and which features might be useful or necessary for our cobot's functionality. A talk on the third theme of learning was then delivered, including details about facial recognition and machine learning, and the applications of these in the field of robotics. Likewise, this was useful in reflecting upon how these features might be applicable in our design. Finally, the theme of safety was considered. This talk gave us the knowledge to consider the safety aspects of our cobot, which was particularly apt when considering COVID safety implications too. Overall, the first two days were quite lengthy in terms of screen time (despite some breaks), and I found myself wilting slightly towards the end. However, I think we could all understand and sympathise with the difficulty of minimising screen time when there is only a short space of time to complete all of the summer school activities.
On the final day, we split into our teams to create our cobot. This was personally my favourite part of the summer school, as it was fantastic to work with such a variety of people who all brought different skills to the group. Together, we developed a cobot design and worked through the themes from the previous day, ensuring we met the design brief and covered all bases. Probably the biggest challenge was keeping it simple: we had so many ideas between us, and the five-minute presentation slot meant we had to be strict with ourselves as a group and keep the design simple yet effective.

We then presented our home assistant cobot, Squishy. Squishy was an inflatable, soft cobot designed to assist carers in lifting patients who were bed-bound (occupational injuries being a significant problem within the care industry). Squishy's soft design provided comfort for the patient being lifted, while the modular design offered a cost-effective solution and the possibility of added extras if necessary. Squishy also had wipe-clean surfaces to enable effective cleaning in light of COVID-19, and aided social distancing by reducing the need for carer-patient contact. Other features included machine-learned skeletal tracking and thermal cameras to aid safe functioning, and minimal personal data collection to maintain ethical standards. After the presentations and the questions that followed, the judges deliberated. The results were in… my team were the winners! While I was happy to have won with my team, the most fruitful part of the experience for me was meeting and learning from others who had different backgrounds, perceptions and ideas.
Overall, I felt the summer school was well-organised and a fantastic opportunity to work with new people from diverse backgrounds, and I was very glad to be a part of it. I’m also pleased I overcame the ‘Imposter Syndrome’ feeling of not believing I would know enough or have enough experience to be a valuable delegate in the summer school. So, my advice to all students would be: don’t underestimate what you can contribute, don’t overthink it, and just go for it; you might end up winning!
The online event, running over two and a half days, involved 28 delegates from various UK universities and culminated in a brief to design a COVID-ready COBOT (collaborative robot) to work in either Food Production, Healthcare, or the Home. Squishy was the collaborative brainchild of myself and the other five members of my group – the BOTtom Wipers… The group comprised me and Cecily from the 2019 cohort at Horizon CDT and Laurence, Hector, Siya and Robin from Lincoln, Strathclyde, and Edinburgh/Heriot-Watt universities, respectively.
The day and a half leading up to the design brief set the context through a series of related talks on the challenges of working in the different sectors as well as discussions on core aspects such as Ethics, Interaction and Comms, Learning and Safety. Hence by Friday morning, we were ready for our design challenge – to design a COBOT relevant to the COVID world we currently live in and present the concept in five slides lasting five minutes – and to achieve this by mid-afternoon the same day!
Our group quickly worked out how to make the most of our different individual backgrounds, ranging from robotics and machine learning to neuroscience and psychology. The challenge we decided on was situated in the home: lifting bed-bound residents, a task that places considerable physical strain on carers and requires close contact with individuals; obviously less than ideal in a COVID world.
Our solution was Squishy: a cost-effective assistive COBOT inspired by the fictional superhero Baymax (Hall & Williams, 2014) and the caterpillar robot made using a 3D printer that could output soft, rubbery material and hard material simultaneously (Umedachi, Shimizu & Kawahara, 2019).
We decided on a soft, modular COBOT since we felt this would be more comforting and comfortable for the individuals being lifted. Manufacturing costs can limit access to assistive robots, so Squishy was inflated using pressurised air, with different air pockets allowing his shape to be modified to suit individuals of different body sizes and shapes. To ensure stability, safety and hygiene, we chose a two-body system comprising flexible 3D-printed silicone moulds overlaid with a wipe-clean textile. Being able to keep Squishy clean was critical given COVID.
Our next challenge was to ensure that Squishy could lift and put down an individual safely. We decided to use input from thermal cameras and real-time skeleton tracking using OpenPose since this is a relatively straightforward and cost-effective system. We planned to teach Squishy to hold and lift safely via incremental learning of holding/lifting varied body shapes and weights, either from data sets or by imitation. The use of thermal cameras and skeleton tracking also allowed us to provide two additional modules if required. The first option was temperature screening (37.8 degrees Celsius or greater potentially indicating COVID infection) and the second was for Squishy to gently rock the individual to comfort them if required. A rocking motion has been shown to promote sleep in infants and, more recently, also in adults, (Perrault et al., 2019).
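The optional screening module's rule can be expressed very simply. The following is a hypothetical sketch of the threshold logic described above, not actual Squishy code:

```python
FEVER_THRESHOLD_C = 37.8  # screening threshold from the design brief

def screen_temperature(readings_c):
    """readings_c: skin-temperature samples (in °C) from the thermal
    camera for one person. Flags a potential fever if the peak
    reading meets or exceeds the threshold."""
    return max(readings_c) >= FEVER_THRESHOLD_C
```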
For ease of use and safety, we deliberately kept the input and output communications simple: a wearable control bracelet or necklace with buttons for basic functions (e.g. lift up/down) as well as an emergency stop button which would signal that assistance was required.
Ethical issues were key, both in terms of the collection and storage of personal data and the psychological aspects of Squishy interacting with humans. We decided to collect only the minimum personal data required for a safe and comfortable interaction, such as height, weight and BMI (which could be combined with skeleton tracking data), with the individual requiring assistance identified only by a unique identifier. Data would be stored in a secure system such as Databox, a platform for managing secure access to personal data developed through an EPSRC project involving collaborators from Queen Mary University of London, the University of Cambridge and the University of Nottingham. All our data processes would be GDPR compliant.
The individual’s response to and relationship with Squishy was also central to the design both in terms of the COBOT’s appearance, feel and touch and the use of slow, comfortable movements which engender relaxation and trust.
Having discussed and honed our design ideas, we then had to consolidate them into five slides and a five-minute presentation! We were each involved in different aspects of the brief, following which we collectively refined the slides until we had the final version. Getting across the key elements in five minutes proved to be a challenge: our first run-through came in at closer to seven and a half minutes, but on the day we just managed to finish on time. It was interesting to see how many people really struggled with the time limit, and I am sure my summer school experience will be useful when I enter the Three Minute Thesis (3MT®) in 2021…
And the outcome of all this hard work and collaboration? I am delighted to report that The BOTtom Wipers and Squishy won the COBOT challenge.
Hall, D., Williams, C., 2014. Big Hero 6 (Film). Walt Disney Animation Studios.
Perrault, A. A., Khani, A., Quairiaux, C., Kompotis, K., Franken, P., Muhlethaler, M., Schwartz, S., & Bayer, L. (2019). Whole-Night Continuous Rocking Entrains Spontaneous Neural Oscillations with Benefits for Sleep and Memory. Current Biology, 29(3), 402-411.e403. https://doi.org/10.1016/j.cub.2018.12.028
Umedachi, T., Shimizu, M., & Kawahara, Y. (2019). Caterpillar-Inspired Crawling Robot Using Both Compression and Bending Deformations. IEEE Robotics and Automation Letters, 4(2), 670-676. https://doi.org/10.1109/LRA.2019.2893438
As we all know, the current pandemic has put a stop to mass gatherings for the foreseeable future. This has meant cancelling academic conferences or trying to deliver them online. When submitting to The British Academy of Management (BAM) conference in February 2020, I was looking forward to an exciting trip to Manchester involving lots of networking with fellow PhD researchers and more established academics. But, as it happens, the virus is still spreading amongst us, and instead of a nice Manchester stay and sophisticated conversations over fancy conference meals, I was stuck in my study in front of a computer screen for four full days, detaching myself from my laptop only to grab a quick bite to eat.
The postponement of the live conference until 2021 was initially somewhat of a disappointment. But when the time arrived, however, (and perhaps motivated by the long-lagging six weeks of school holidays), I was, in fact, quite excited to usher my child back to school and start engaging with the online event. I was hoping that the symposium could offer me an opportunity to meet other early career researchers specialised in entrepreneurship and/or cultural and creative industries, and also, as my professional background is in event management, I was keen to see how an event that traditionally takes place in a live setting can be adapted into an online environment.
For those of you who are not familiar with BAM (founded in 1986), it is the leading authority on the academic field of management in the UK, with a mission of supporting and representing scholars, as well as engaging with the international business and management community. BAM has over 2,000 members, of whom 25% are located outside the UK and 30% are PhD students. It publishes two academic journals, the British Journal of Management and the International Journal of Management Reviews, and every year it organises the BAM conference, which usually gathers around 900 delegates worldwide.
This blog post is about the doctoral symposium, which kicked off the conference week with a one-day event and was followed by the three-day main conference. I have two focuses with this post. First, I will highlight some key moments from the symposium, which I hope can be useful for other postgraduate researchers. Second, I will discuss the online conference experience from a participant’s as well as an event manager’s perspective.
The BAM Doctoral Symposium brought together over 160 participants from all over the world for a full day of Zoom sessions aimed at postgraduate researchers. In the welcome speech, the organisers stressed that, thanks to the online delivery, the event had attracted more international participants than in previous years. This was most likely because the online format was significantly more affordable, with a lower conference fee and no travel costs. The symposium was fully conducted on Zoom, and the programme was split into sessions aimed at both early- and later-stage PhD students, meaning that we could decide for ourselves which sessions best suited our needs.
The first highlight of the day was a session aimed at early-stage PhD students, entitled “Conducting a literature review” and led by Prof David Denyer and Dr Colin Pilbeam. Although I'm already starting the 3rd year of my PhD, I have recently been revising my literature review (a never-ending task, I guess…), and therefore decided to join this session. The presenters began by stressing that it's fundamental to get the literature review right, as otherwise it is very hard to justify the research question. Next, they moved on to discuss, among other things, the common traps PhD students fall into when writing a literature review. Of the seven traps, I found that the “broad and unfocused review question” trap resonated most strongly with me; I couldn't help thinking back to the numerous hours I've spent reformulating my review questions in the past year. The presenters shared a useful example of a question that lacked focus and needed narrowing down.
What was particularly good about this session, however, was the presenters' use of Mentimeter, which enabled them to conduct live polls during the session. Mentimeter also has a free plan that is suitable for students, for example, and even some of the premium plans are quite affordable. I'm definitely planning to try it out with my next online presentation.
The second highlight of the symposium relates to perhaps the most dreaded question amongst PhD students. I guess each of us, at least once during the PhD journey, is faced with the question “What is your contribution to theory?” In fact, this happened to me as early as my first annual review, and hoping to provide a better answer next time, I signed up for Professor Ashley Braganza's session entitled “Exploring Theory as a Lens for Research”.
Professor Braganza began his presentation by underlining the relevance of the unit of analysis to doctoral studies. To be honest, I had never asked myself what the unit of analysis in my research was, but I do agree that the question seems highly pertinent and can guide you in the right direction when selecting your theoretical framework.
In addition, he argued that when selecting theories you should not use more than three, and ideally stick to a maximum of one or two, with the rest framed as research context. This was somewhat comforting to me, as it is something I have struggled with over the summer while revising my literature review and trying to locate my conceptual home in the broad array of multidisciplinary literature. I suspect that having too many theories to choose from is a common issue for interdisciplinary researchers.
The final highlight of my symposium day was a paper presentation session, where selected PhD students presented their research and were offered the chance to receive feedback from the audience as well as from a senior academic. I decided to watch two presentations in the entrepreneurship track, focusing on female and family entrepreneurship. It was very inspiring to hear about fellow PhD students' work and compare their journeys to my own. I didn't submit a paper to the doctoral symposium, as our developmental paper had been accepted for the main conference, and I felt that being in the middle of my second phase of data collection probably wasn't the best time to present findings from my research. I did end up regretting my decision though, and blaming my perfectionism for it, as in fact it would have been totally fine to present some initial findings. Nevertheless, it was reassuring to realise that others' papers were nowhere near finished or polished pieces of writing. I guess the main advice I can give to other PhD students is that you shouldn't hesitate to submit to doctoral symposiums, even if your work is still far from perfect. Luckily, I will still have a chance to do so next year.
The online experience
From a technological perspective, everything worked smoothly, and Zoom proved to be an adequate platform for hosting over 160 people for a one-day event. The larger sessions were conducted in Zoom's webinar mode, meaning that cameras were only on for the speakers, while everyone else was muted and could ask questions in the Q&A section. During the smaller sessions, we were able to have our cameras on and join the conversation if we wished to do so. However, Zoom is not a conference platform as such. This meant that every session had a different URL, and you had to constantly navigate between the PDF programme, where the video call links were, and the actual Zoom app. As I was also taking notes, and occasionally tweeting, the whole experience was quite laborious, involving continuous switching from one browser page to another depending on what I wanted to do. I do hope this pandemic and the increased demand for all-in-one virtual conference platforms will motivate companies to develop more user-friendly platforms that can handle large numbers of participants, as I haven't yet been lucky enough to come across one.
To sum it up, a virtual conference simply cannot compete with the real thing – at least not yet. However, until technology develops enough to provide a smoother experience, it remains the best option we have for maintaining at least some aspects of normality. That said, the main difference between the online and live conference formats was undoubtedly the lack of ad hoc networking opportunities, those random conversations that in real life might take place during a coffee break, or over a conference meal. For instance, we were able to ask questions during presentations on the Zoom Q&A section, but there were no real opportunities to engage in a conversation with fellow PhD candidates.
It is evident that spontaneous digital engagement between people who have never met is extremely difficult to achieve in a virtual setting, as it doesn't occur naturally or without encouragement. However, I do realise that this is only the beginning of what might be an era of virtual events, and the technology for conducting this type of event is developing at a rapid pace. I also think it's important that event organisers take the risk of experimenting with different software and try out new ways of structuring their events. For example, and perhaps because of time restrictions, this event didn't have any time allocated for networking, meaning that 160 participants didn't have a real opportunity to meet each other. This was not the case during the actual BAM conference, where plenty of time was allocated to networking in custom-made virtual coffee rooms, but as it was not compulsory or moderated, only a handful of delegates made the effort to engage with others.
From the virtual doctoral symposium experience, I think the main takeaway is that it sparked my appetite for the BAM live conference and introduced me to a like-minded, extremely inclusive research community. As my research sits between arts and business, I remember that last year, after participating in the Audience Research in the Arts conference, I ended up feeling a bit like an interdisciplinary alien. That's why I'm happy to say that at the BAM conference this year I didn't feel like that at all; on the contrary, I felt like the business and management community was (virtually) embracing me quite warmly. My fingers are tightly crossed in the hope that in 2021 I can take part in the doctoral symposium and BAM conference in person, and I promise to be very grateful for every little opportunity for small talk or any form of live conversation.
Publishing a Paper About In-Person Interactions at a Virtually Held Conference (CHI PLAY)
The National Videogame Arcade (now the National Videogame Museum) is a games festival-turned-cultural centre that celebrates games and the people who make, play and interact with them. While more traditional museums might not allow visitors to directly interact with their exhibits, the NVM encourages this direct interaction in an open, genuine way: “Games are for everybody” is one of the core values of the museum. Besides being an educational and creative hub for everything games-related, it is also my PhD’s industry partner.
Before the NVA moved to Sheffield in September 2018 to become the NVM, I was lucky to join my research partner for most of their last month in Nottingham. Since my research is centred around exploring ideas of (self-)care through a justice-, collective- and games-informed lens (to paint a picture with broad strokes), I was very keen on figuring out how people in the NVA made sense of it and how they created meaning in their interactions with the space and with others. To do so, I joined visitors during their “journey” through the NVA: I watched people play games, enjoy themselves (or get frustrated!) and share and make memories.
After carefully analysing the data (and a couple of busy months with other studies), I asked two of my fellow doctoral researchers if they would be interested in exploring the data once again and writing a paper together: Gisela Reyes-Cruz (my go-to-person for everything interaction and ethnomethodology-related!) and Harriet “Alfie” Cameron (who has a keen eye for everything museum- and power-structures-related).
Together, we wrote and submitted “Plastic Buttons, Complex People: An Ethnomethodology informed Ethnography of a Video Game Museum” to CHI PLAY 2020, where it got accepted!
The paper explores how people interact with the playable exhibits and with each other through the practices they establish in and around the games. It finishes with some ideas and design implications that spaces wanting to engage people in co-located play can take on to support or disrupt group interactions.
Here is our abstract as a teaser:
“This paper reports on an ethnomethodology-informed ethnography of a video game museum. Based on 4 weeks of ethnographic fieldwork, we showcase how groups of visitors to the museum achieved the interactional work necessary to play games together and organise a museum visit as a social unit. By showcasing and explicating excerpts of museum visits we make the taken-for-granted nature of these interactions visible. Embedded within an activity map that outlines how people prepare, play, wind down and exit games, we showcase the sequential, temporal, and carefully negotiated character of these visits. Based on our findings and the resulting Machinery of Interaction, we propose three design implications for spaces that aim to exhibit video games and/or try to facilitate co-located, collective video gameplay.”
It is a weird feeling to present our first paper in a virtual format, but it is a tremendously joyful occasion (and a very enriching learning experience)!
This paper would not have been possible without the support of my supervision team Martin Flintham, Pat Brundell and David Murphy; the lovely and wonderful folks at the NVA/NVM, all of the visitors who took part in research and Stuart Reeves for valuable comments on one of the earlier paper drafts!
Hello everyone. I hope you are doing well and are well-supported during these trying times. This period has allowed me to reflect on two public engagement activities that I took part in last year: two exhibitions where I had the opportunity to engage with the public while presenting three Mixed Reality Lab (MRL) creations at two venues (MRL at CHI 2019 and Halfway to the Future 2019). The MRL creations that I presented were:
A touch-based two-player cooperative video game, in which each player holds a sensor stick with one hand and touches the hand of the other participant to control how low and high an airship can fly. Participants have to find the sweet spot to fly the airship to collect coins and not run out of gas.
A virtual reality (VR) experience that plays with notions of control, sensory misalignment, and vertigo. Participants are placed on top of a swivel chair which is rendered as a bolt by the VR interface and attempt to unscrew it by turning.
A VR exploration of sensory misalignment, in which people use the motion of a swing to control their navigation through the virtual world. The VR interface couples the motion with movements in the game.
In the two venues, people from different backgrounds came to try the above experiences: researchers from the field of human-computer interaction (HCI), employees of technology companies, people who don’t work in technology-related fields, and people who accompanied attendees of the conference, from adults to children, all participated in the experiences.
People were interested not only in having the experience, but also in knowing more about the technological, creative, psychological, and other components of the creations. People wanted to know what technology was used and what happened in our minds that made it possible for the creation to integrate the virtual and physical. Given that people had different levels of expertise in the area, it was necessary to tailor explanations to people's needs. In a nutshell, the explanation offered to a professor will differ from that offered to a child.
Overall, I found the exhibitions challenging. There were difficult moments when people were not satisfied with my answers, and I had to find other ways of explaining the mechanics of these VR experiences. I found it very useful to relate my explanations to common knowledge through metaphors and analogies that people could easily connect with.
It was a very enriching experience given that I had the opportunity to talk about interactive experiences, VR, and motion sensing with people from different backgrounds. Those topics are very different from my PhD's area of research, which focuses on design ethnography and smart products.
In general, the experience was very fulfilling. I had the opportunity to spend my time exchanging ideas with people and engaging in thought-provoking conversations. It has taught me valuable skills in speaking about and explaining my research to a diverse audience. I consider some positive outcomes of my participation in these public engagement activities to be:
The opportunity to practice critical thinking and consider the relevance of our research to the real world
Improvement of communication skills: we are accustomed to talking to an academic audience, and this kind of experience gives us the opportunity to practice with non-academic audiences
Building confidence as we have to be prepared to answer all kinds of questions on the spot and without prior preparation
Personal satisfaction from sharing with society part of the research that we conduct in the confined spaces of the lab
I hope that we will soon be able to take part in such open and diverse environments again.
At the end of 2019, as a Horizon CDT student at the University of Nottingham, I attended a workshop (called Computer Vision for Physiological Measurement) in Seoul, South Korea. The workshop focused on applying recent advances in computer vision to measuring human physiological status.
This year, more than 50 people from companies and academic institutions attended the workshop, and 19 of us gave talks to share our research and discuss potential future research directions.
During this workshop, I gave a talk about applying state-of-the-art machine learning techniques to automatically detect emotions from people's faces. In particular, the approach uses people's facial muscle movements to infer emotional status. The technique can also be applied to other purposes, as facial dynamics can reflect many different aspects of a person's state.
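As a rough illustration of the general idea (not the system presented in the talk), facial muscle movements can be encoded as a small feature vector of action-unit intensities and mapped to an emotion label by an off-the-shelf classifier. The features, labels, and values below are entirely hypothetical:

```python
# Minimal sketch: hypothetical facial action-unit intensities -> emotion label.
# Real systems first extract action units from video with dedicated detectors;
# this only illustrates the final classification step.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: illustrative intensities of three action units
# (e.g. brow raiser, lip-corner puller, jaw drop).
X = np.array([
    [0.9, 0.1, 0.0],  # raised brows, neutral mouth
    [0.1, 0.9, 0.1],  # strong lip-corner pull (smile-like)
    [0.8, 0.0, 0.1],
    [0.0, 0.8, 0.2],
])
y = ["surprise", "happiness", "surprise", "happiness"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.05, 0.05]])[0])
```

In practice, the feature vector would come from a facial-landmark or action-unit detector running on video frames, and the classifier would be trained on far larger labelled datasets.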
More importantly, we also discussed how such techniques could benefit our daily lives. For example, they could be extended to make quick and objective judgments about someone's mental health, such as depression, or to predict someone's personality. Quickly and automatically understanding human personality is important in recruitment: it can help employers better recognise which candidates are more suitable for a job and more willing to work in a group. Mental healthcare is another potential application. While it is expensive and time-consuming to have mental health experts make diagnoses, such a technique could provide a cheap, quick, and objective assessment for most patients, as well as more useful information for clinicians.
In short, such techniques have great potential to improve businesses and our quality of life. For investors, they could be a promising direction in which to invest money and time.
Since the workshop was co-located with the ICCV conference, at the end of the event I had a great time at the banquet and enjoyed chatting with other attendees.