Reflection on outreach: Dstl AI Fest 4

post by Matthew Yates (2018 cohort)

In October 2021 I gave a presentation on my PhD project at Dstl’s AI Fest 4. This is now an annual event held by Dstl and attended by various government departments, industry partners and academic researchers. The event was held online over a virtual conference platform, with over 100 talks from AI experts over the course of two days.

The central aim of the event was to discuss topics surrounding “Trustworthy AI”, with Dstl stating that its mission is “to de-mystify the area of AI by helping MOD understand how it can responsibly and ethically adopt AI in order to deter and de-escalate conflict, save lives and reduce harm.” Like other virtual conferences that became the norm during the pandemic, there were different panels where attendees could interact with the speakers during Q&As, as well as various networking rooms to talk to others about research.

I presented my work as part of the “AI Methods and Models” panel, with my then-current working title for my PhD and presentation being “Accurate Detection Methods for Image Synthesis”. As I had given multiple previous presentations on my PhD at various stages of the project, I knew the content quite well, but I made some adjustments for this specific audience and for what I wanted to focus on as the key message of my work. As I was presenting a work in progress, with still a year of research to go, I decided it would be best to focus on the importance of the interdisciplinary methodologies I was using for my project rather than on any final models or results (which I was still working on). I also thought this might help differentiate my talk from the rest of the panel, which was concerned with various novel implementations of machine learning models.

In my presentation, I gave a brief overview of myself and my project’s background, as well as a short explanation of Generative Adversarial Networks for those in the audience who were less familiar with deep learning models. The main content of my presentation, however, was my mixed methods approach to fake image detection. I explained that, as the objective of much fake image generation is to fool human visual perception (e.g. fake news, deep fakes etc.), taking a human-centric approach to investigating novel detection methods is equally important as, if not more important than, looking at purely algorithmic solutions. I then presented the results of my initial image detection study, which found differences in detection behaviour between computational and human detection methods, as well as differences between experts and novices. As this was the stage of my research plan I had reached, I spent the rest of the presentation discussing the implications of my current results and how they were going to inform the rest of my work. At the end of the presentation, I took a short Q&A, with questions about other possible metrics I could use to measure human visual perception in image detection, such as eye tracking, and about how to combine this research with automated methods. Both of these questions were easy for me to answer, as fortunately they were both ideas I had planned to explore myself in the final stage of my research.

On reflection, I thought I had got my main points across in an engaging way and had been able to communicate them to people with differing levels of technical background. However, with the event being held online, it was much more difficult to get a sense of this than if I had been presenting face to face. Although presenting to a screen can sometimes alleviate the nerves of presenting to a large, live audience, I find it can also be quite hard when you don’t have any visual feedback from the audience members present.

Despite some of the reservations I still have about online conferences, I did find my experience presenting at Dstl’s AI Fest useful. In addition to the experience of communicating my research to a live audience, it was also a useful opportunity to get to know other people at Dstl and in industry. The timing of the event coincided with my 3-month Dstl internship, so some of the people I was working with at the time also attended the conference and could get an idea of the kind of work I was doing at Horizon and the Computer Vision Lab for my PhD project.

Presenting the Future of Healthcare at the Cobot Maker Space

post by Angela Higgins (2022 cohort)

Students from the Horizon Centre for Doctoral Training were hosted by the Cobot Maker Space to present their visions of healthcare utopias, and how to avoid ending up in a medical dystopia.

From robotics and AI to health tracking and gene sequencing, are we headed towards a utopian or dystopian future for healthcare? For Future Products Sector Day at the Horizon CDT programme, our group were asked to present our future visions of healthcare to the PhD cohort. Using the range of robots available in the Cobot space, we demonstrated and discussed how these technologies could be used to the benefit or detriment of human wellbeing in the future.

Demonstrations included a UV cleaning robot which could be used to disinfect hospitals, telepresence robots which could be used for remote doctors’ visits, and companion robots for older people. In a walkaround tour of the living space, an experimental area designed to simulate a home living room and kitchen, we demonstrated how Internet of Things sensors could be used to monitor activity and help people track their health. These technologies have great potential to allow people to take control of their wellbeing, keep medical professionals in the loop and ultimately allow people to live in their own homes for longer. However, this raises questions about surveillance, data protection, privacy, and dignity for older people.

Academics from the University of Nottingham contributed their expertise, including Professor Praminda Caleb-Solly, who spoke about her work researching robots to help support older people. Praminda spoke about work at the CHART research group, and how robots could be used to enable and enhance human-human interaction for health and care, rather than replace it. Epidemiology PhD student Salma Almidani also spoke via video interview, discussing future pandemics, vaccine hesitancy, and where technologies may be useful in healthcare ecosystems of the future.

The afternoon was finished off with talks from us in the 2022 cohort. Jon Chaloner spoke about universal health coverage and how we can work towards providing everyone worldwide with a full range of accessible healthcare across their lifetime. Gift Odoh talked about how healthcare and telepresence robotics can influence and benefit each other through technology and knowledge exchange. Finally, I closed by talking about the future of pain, and how we could use robotic devices and sensing technology to better understand, respond to and manage pain.

The afternoon provided a range of emerging perspectives on the future of healthcare, and debate about how these technologies could be used and abused. Ultimately, by discussing and exploring imagined utopias of the future, perhaps we can identify routes to get there, whilst avoiding some of the dystopic pitfalls along the way.

‘Interactions with Coffee Wizard’: Reflections on a first conference paper

post by Oliver Miles (2018 cohort)

Overview

In my first paper, ‘Interactions with CoffeeWizard’, I gave an account of participant interaction with a values-orientated choice and prediction framework, embedded in a coffee selection box activity in the home. It was originally submitted to the Computer Supported Cooperative Work (CSCW) 2021 conference, but despite feedback citing key points of merit (good alignment with the venue, a sound methodology, and agreement with the general line of argument), the paper was unfortunately rejected at first pass. Despite reading and re-reading the reviewers’ comments at the time, it is only now, during the process of writing up a new study based on the same framework, that I fully appreciate and can apply the suggested changes.

In the following, I will give an overview of the paper, the motivation for writing it and for specifically choosing the CSCW venue. I will then reflect on the more practical points of collaboration and advice during paper drafting, before outlining some of the feedback I received and how I intend to improve my next submission based on this. Hopefully, sharing some of my insights regarding responding to reviews can help others – particularly if they are writing their first paper as a solo author.

‘Interactions with CoffeeWizard’

Broadly, my thesis explores the use of discrete value sets, such as product attributes and personal end-goals in life, as grounds for personalization in the recommendation of everyday coffee consumption. The purpose of my first study was to deploy a novel interaction framework for surveying, predicting, and eliciting retrospection on personal value preferences, elicited through coffee choice selection and reflected back to the participant as infographics. Realised through an initial questionnaire, a coffee selection activity, and a follow-up interview, this would allow me to demonstrate the kinds of interaction elicited at each stage of the framework and improve the proposition so that it captures this as rich, contextual data. The study was delivered as a domestic deployment due to Covid-19 restrictions at the time. In the findings, I present and discuss the results of interviews with 12 participants, whose reflections enabled a discussion of the emergent, practical values of selecting coffee based on its reputed value attributes and of making choices congruent or incongruent with the apparent predictions.

Motivations

In terms of motivation for the paper, I wanted to share my theoretical ideas with peers in the human-computer interaction (HCI) community whose work tends towards testing and developing prototypes. I chose the CSCW conference as it positions itself as ‘…a premier venue for presenting research in the design and use of technologies that affect groups, organizations, and communities’ [1], which aligns well with the practicalities of incorporating social research and HCI methodology. More broadly, this was my first opportunity to formally share ideas with an HCI audience, whose common idioms can be challenging to adopt when approaching the field from another discipline.

Paper preparation

If you are completely new to paper writing, as I was for this piece, I would recommend attending any of the Research Academy courses related to effective writing as soon as you get the opportunity. In my experience, paper writing is fundamentally different from other forms of academic writing, as it requires your work to retain its originality while at the same time reflecting the nuances of the conference or journal, not to mention strict formatting and editorial guidelines. Supervisor feedback is therefore also crucial during the drafting process. It can be tempting to wait until you have entire finished sections, or even a full draft, before you seek feedback. To counter this, I have found that the following ‘skeleton paper’ approach works for me:

    • Outline the known section headings from ‘introduction’ to ‘conclusion’
    • Break these down further into the main substantive points you wish to cover
    • Ensure there is a clear narrative that will bring your reader to the intended contribution

This can effectively read as a full draft while allowing efficient iterations, which can be further substantiated once the main concepts and contributions become coherent.

Handling Reviewers’ Comments

It is easy in hindsight to see the extent to which I was (and was not) following my own advice regarding paper preparation. On the one hand, reviewers picked up on some key merits which resonated with my intended contribution: the proposition appeared to give ‘insightful’ findings, the issue of value-based personalization was agreed to be a relevant interactional one, and the methodology itself was judged to be appropriate. These points were generally very encouraging, given their wider implications for the validity of my thesis.

Nevertheless, the under-developed contribution and literature selection were significant enough to result in rejection. On the first point, I reflect that I had not invested enough in preparing the paper specifically as a paper for the CSCW audience. Reviewers were left with a sense that they had to draw out the findings relevant to them, instead of having them clearly outlined. This is intrinsically linked to the second concern, regarding literature selection. I had only referenced one CSCW publication, with the rest of my sources coming from other conferences or journals. So, while the material I based my work on was described as ‘relevant’, it made it difficult for reviewers to link any contribution back to work specifically emanating from the venue itself.

Role of paper within PhD

‘Interactions with CoffeeWizard’ continues to play a significant role in my thesis as the first deployment of a novel, values-orientated personalization framework. After expanding the literature section and aligning the contribution more closely with contemporary works from CSCW, the work now forms the first empirical chapter of my thesis. In this sense, I hope that reviewer feedback has improved the communication of my work within my PhD itself, as well as informing how I am approaching the write-up of my final study for a similar venue in 2023.

[1] https://dl.acm.org/conference/cscw

Call for Participants: Study of the Psychology of Excessive Viewing on Demand

post by Joanne Parkes (2020 cohort)

As some of you know, I’m a 3rd-year Horizon CDT PhD student partnered with BBC Research & Development and based within N/Lab at the University of Nottingham. I am researching binge-watching behaviours and how we might better manage them if they’re problematic.

Purpose: For this study, I am looking for participants to take part in a 1:1 interview via an online Teams meeting to discuss their viewing habits, perspectives on binge watching and thoughts on why/when people might watch more than they intend.

Who can participate? This study is open to anyone aged 18 and over who regularly (typically at least once a week) watches 2 or more episodes of the same programme and/or 2 or more continuous hours of on-demand television as their main activity.

Commitment: The interview should take around 60 minutes to complete.

Reward: £15.00 Amazon e-voucher for your participation.

How to participate: Email me at joanne.parkes@nottingham.ac.uk to express your interest and arrange a mutually convenient meeting time. Evenings and weekends will also be available.

More information about the study is available on request; for any queries, please feel free to contact me using the email address provided.

Read more about my research project.

Call for Participants: Fake Image Detection w/ Eye tracking

post by Matthew Yates (2018 cohort)

I am a final-year Horizon CDT PhD student partnered with Dstl. My PhD project is about the detection of deep learning-generated aerial images, with the final goal of improving current detection models.

For my study, I am looking for participants to take part in a short face-to-face study on detecting fake aerial images, which we have created using Generative Adversarial Networks (GANs).

I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (e.g. aerial imagery, satellite images) or GAN-generated images.

Purpose: To capture gaze behaviour during the detection of GAN-generated fake images among real aerial photos of rural and urban environments. Participant accuracy and eye movements will be recorded.

Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.

Commitment: The study should take between 20 and 40 minutes to complete and takes place in Room B73, Computer Science Building, Jubilee Campus.

Reward: £10 Amazon voucher for your participation.

How to participate: Email me at matthew.yates1@nottingham.ac.uk with dates/times that you are free, to arrange a timeslot.

For any additional information or queries, please feel free to contact me.

Thanks for your time,

Matt

+44 (0) 747 386 1599 matthew.yates1@nottingham.ac.uk 

Map Check

post by Vincent Bryce (2019 cohort)

This summer saw me walking St Cuthbert’s Way, a 100km hiking trail in the Scottish Borders/Northumbria area with my children. It was a great trip, plenty of challenge but achievable, and I’d recommend it:  https://www.stcuthbertsway.info/long-distance-route/

The trail was well signed, but we still needed our map and compass in places. It’s three years since I started the PhD, two of which were part-time, and it feels like a good time to check where I am and where I’m going:

Where am I

I’m starting the third year of a part-time PhD in the Horizon CDT, focussing on responsible research and innovation (RRI) and Human Resource information systems. This is about exploring how organisations can innovate responsibly with digital technologies, the challenges this involves, and some of the specific issues for HR technologies.

I’ve chosen a thesis by concurrent publications route, meaning a set of related studies rather than one overarching thesis.

Where am I going

I am going to complete my PhD and then plan to come back into full-time HR work, applying the insights to my digital HR practice. The experience of being a student and researcher at the university where I work will help me keep a strong customer focus.

What have I done so far

Following a year of taught activity about a range of digital economy and computer science topics, I’ve completed a series of studies and articles.

Highlights include a study of published responsible innovation case studies exploring the benefits of RRI, pieces on HR analytics and their ethical implications, presentations at the Ethicomp, CIPD Applied Research, and Philosophy of Management conferences, and critical articles on wider challenges for responsible innovation, such as low-code technologies and cross-cultural aspects.

I’ve seen new ideas and emerging technologies, and built skills in coding, data science and writing, from bot-based blogging, digital watercoolers and AI coaching to augmented and virtual reality tools.

What are my main findings to date

  • Responsible innovation practices are associated with business benefits.
  • Digital technologies, in particular ones users can reconfigure for themselves, pose challenges for responsible innovation methodologies, because these tend to rely on the technology being developed in ways which anticipate and respond to societal needs. End users, rather than scientists and developers, are increasingly able to innovate for themselves.
  • Algorithmic HR technologies give HR new capabilities, but are linked to some ethical concerns and have features which imply a need for responsible innovation and implementation.
  • Interviews with HRIS suppliers suggest they have limited opportunities to engage wider stakeholders and anticipate downstream impacts, creating a reliance on client organisations to reflect on how they apply the technologies.
  • The knowledge and values of HR practitioners are a critical constraint on responsible algorithmic HR adoption.

What are my priorities for the coming year

Completing my thesis synthesis document; concluding in-progress studies on the increasing scope of employee data collection and on HRIS supplier and practitioner perspectives; and getting in position to submit by September 2023.

Right – onwards! I’ve recently attended the Productivity & the Futures of Work GRP conference on Artificial Intelligence and Digital Technologies in the Workplace to present about my study on the increasing scope of employee data collection and hear about what’s hot and what’s not.

originally posted on Vincent’s blog

AI, Mental health and the Human

post by Shazmin Majid (2018 cohort)

Pint of Science 2022 – Bunkers Hill, Nottingham

I delivered a talk about AI, mental health and the human at Pint of Science 2022, which this year had the theme “A Head Start of Health”. Pint of Science is a grassroots non-profit organisation that runs a worldwide science festival, bringing researchers into a local pub, café or other space to share their scientific discoveries, with no prior knowledge needed. There are over 24,000 attendees in the UK, with over 600 events across more than 45 cities. There were three talks on the night, all focusing on the theme of mental health.

Structure of the talk:

    1. What is AI
    2. How AI is being used in mental health
    3. AI and mental health: my cool experiences
    4. My current issues with AI and mental health

After days of practice, even delivering the jokes on cue whilst in PJs in the comfort of my living room, the day for presenting arrived. Those who know me know that I’m not too shy when it comes to presenting, but this felt different: I really wanted to get the crowd engaged and practise good storytelling. I arrived on the day and was welcomed, especially by fellow Horizon-er Peter Boyes, who was the one who suggested my talk to the Pint of Science crew. I learnt that I would be giving the last talk, and I did something I have never done before: I walked up to the bar, ordered a big old pint and a packet of crisps, and enjoyed the wait. Normally, I would find this process mildly agonising, having to wait until it’s your go. My parents have a collection of photos of me as a child having to wait for a funfair ride. Let me set the scene: fists in a ball, screaming at the top of my lungs. I guess that never leaves you, which is why I’d much rather go first. The pint helped.

My talk aimed to provide a whistle-stop tour of the ways I’ve interacted with AI and mental health: to start off by loosely introducing AI, present some of the state-of-the-art ways it’s being used, summarise the ways I’ve engaged with the sector, and set out what I consider to be the current issues. I can say, this is not how it went down. I was approximately three slides in when I was hit with an image that’ll never leave me: a black screen with the text “slide show ended”. It was right at this moment that I realised I had sent over some butchered version of my slide show. I had only one copy of the slides, which I had sent over – how could this happen! I also realised that I had saved the slideshow on my *desktop* (like, seriously, who does that!) with no remote drive links sprinkled in like fairy dust to access it. A sudden wave of appreciation for being last hit me, because the crowd just bobbed along – on average everyone was around three pints down!

Pete and I scrambled in the corner to find another presentation I could quickly deliver, and we settled on an older MRL lab talk about a piece of research I had published. This work explored the extent of user involvement in the design of mental health technology. And lo and behold, the new structure:

The new structure of the talk:

    1. Background of mental health technology
    2. The research questions
    3. The method of exploration
    4. Our results
    5. What we recommend for the future

Getting into the nitty-gritty:

    1. Background of mental health technology

Self-monitoring applications for mental health are increasing in number. User involvement in their design draws on a long history in Human-Computer Interaction (HCI) research and is becoming a core concern for designers working in this space. The application of models of involvement, such as user-centered design (UCD), is becoming standardised to optimise the reach, adoption and sustained use of this type of technology.

    2. The research questions

This paper examined the current ways in which users are involved in the design and evaluation of self-monitoring applications, specifically for bipolar disorder, by investigating three questions: a) are users being involved in the design and evaluation of technology? b) if so, how is this happening? and c) what are the best practice ‘ingredients’ for the design of mental health technology?

    3. The method of exploration

To explore these practices, we reviewed the available literature on self-tracking technology for bipolar disorder and made an overall assessment of the level of user involvement in design. The findings were reviewed by an expert panel, including an individual with lived experience of bipolar disorder, to form best practice “ingredients” for the design of mental health technology. This combines the existing practices of patient and public involvement and human-computer interaction, evolving from the generic guidelines of UCD towards ones tailored to mental health technology.

    4. Our results

For question a), it was found that of the 13 novel smartphone applications included in the review, 4 had no mention of user involvement in the design, 3 had low user involvement, 4 had medium user involvement and 2 had high user involvement. Regarding question b), it was found that despite the existence of established approaches for involving the user in the process of design and evaluation, there is large variability in whether the user is involved, how they are involved, and the extent to which there is a reported emphasis on the voice of the user – the ultimate aim of the design approaches used in mental health technology.

    5. What we recommend for the future

As per question c), it is recommended that users are involved at all stages of design, with the ultimate goal of empowering and creating empathy for the user. Users should be involved early in the process, and not just in design itself but also in the associated research, ensuring end-to-end involvement. The healthcare design and human-computer interaction communities need to work together to increase awareness of the different methods available, encourage the use and mixing of those methods, and establish better mechanisms to reach the target user group. Future research using systematic literature search methods should explore this further.

Closing remarks

Adaptability is the moral of the story here! Practice can make perfect, but in the end technology failed me, even though my talk was about technology – ironic! I guess I was more proud of delivering the talk in this haphazard way than I would have been delivering it on cue as I had practised. Another reflection is that after four years of doing a PhD, it’s interesting how naturally you can talk about the topic at hand – rambling for 20 minutes just flowed. Talking about your PhD to a non-technical audience was also a very interesting experience, and a great chance to practise good storytelling.

What is Pint of Science?

post by Peter Boyes (2018 cohort)

“The Pint of Science festival aims to deliver interesting and relevant talks on the latest science research in an accessible format to the public – mainly across bars, pubs, cafes and other public spaces. We want to provide a platform which allows people to discuss research with the people who carry it out and no prior knowledge of the subject is required.”

This was a new one for me, a collision of worlds. I’ve spent the last 8 years in Nottingham, studying for my undergraduate, master’s, and now PhD. I did some extra bits alongside my course – I was a PASS leader in the scheme’s first year in the School of Mathematics, and have done events here and there as a PhD researcher – but I mostly stick to my studies and then explore volunteering in places beyond academia. I’ve enjoyed helping coordinate sports clubs and competitions since joining university, but Pint of Science arose as an opportunity to combine my two halves: volunteering and putting on events related to my studies.

I got involved at first as a general volunteer, lending a hand on a couple of the nights, but moved into a Theme Lead role early in the year when an opening popped up. About nine months ago, my team of fellow volunteers and I were allocated our theme (Beautiful Mind – anything around human senses and the brain) and we set about recruiting speakers and planning our event. We had three evenings at Bunkers Hill, Hockley to fill, and grouped our nine speakers into similar topic areas, broadly covering Pain, Senses, and Mental Health. We checked out the venue space and planned out schedules for the nights, with presentations, Q&As, and some activities for the audience such as quizzes (what else do you expect on a weeknight in a pub when you’re talking science). May flew round, and tickets got snapped up. The nights went fantastically: there was a buzz with the great speakers, and the final night in particular packed out the venue space to end on a high note.

This side venture was a little outside my comfort zone: yes, I’m familiar with volunteering and running events, and I’ve been in academia for 8 years, but the theme wasn’t in my area of expertise and science outreach was a new experience for me. I was supported well throughout by a great team more familiar with the topics and with events like this one. I’ve learned a lot about outreach through these nights, particularly about how to facilitate public engagement and convey cutting-edge research and expert topics to the general public – no easy task. The most revealing part of each night was being able to listen to the speakers talking to each other, some seasoned Pint of Science-ers, some new to the event. I also had the privilege of facilitating fellow Horizon CDT 2018 cohort member Shazmin Majid presenting her latest work.

This experience has given me confidence in presenting my work and in how to go about it – and, equally, how not to go about it. Avoid overloading slides with text and too much inaccessible specialist terminology. It’s fine to use some terms if you define them and get the audience up to speed, but if every slide needs five terms defined, with sub-definitions on top, you need to find other ways to convey your research; otherwise it breaks up any flow and makes the talk difficult to follow, particularly for non-experts in the field. Analogies are great – again, not too many, and not too convoluted. I have been given advice before about analogies, as they can lead to misunderstanding of concepts if followed too far, but a well-crafted one can enhance the audience’s understanding. Demonstrations or activities that let the audience learn through involvement, rather than relying on a perfect explanation, also seal the deal on a great outreach talk. The simpler the demo, the more effective. Though doing any of those things is no easy task.

I would encourage other CDT students to get involved in the coming years, from either side: later-stage PhD students and recently graduated alumni have a great opportunity to put their work out there, while early-stage candidates can see how other researchers slightly further along the journey are engaging with this sort of outreach – it might even give you ideas about your own research.

Summer School Participation Reflection

post by Matthew Yates (2018 cohort)

I participated in the 2022 BMVA Computer Vision Summer School, held at the University of East Anglia. The summer school was aimed at PhD students and early-stage researchers working in computer vision, image processing, data science and machine learning. The event ran from the 11th to the 15th of July and consisted of a full week of lectures from expert researchers on a wide array of computer vision topics, with an emphasis on the latest trends pushing the field forward. In addition to the lectures, there was also a programming workshop, as well as social activities in the evenings such as a computer vision-themed pub quiz and a dinner held at Norwich Cathedral.

As the lectures covered a wide range of topics, not all of them were strictly relevant to my own PhD project and research interests, although it was useful to be exposed to these other areas and gain some tangential knowledge. The event started with a lecture by Ulrik Beierholm on cognitive vision: how it functions and how it compares and contrasts with similar computational vision systems such as Convolutional Neural Networks (CNNs). As my own background is in cognitive psychology and computational neuroscience, I found the lecture very engaging, even if it mainly reiterated ideas I had already studied during my Master's degree. The afternoon of the first day was given over to a programming workshop, where we worked through tasks in a Google Colab document to familiarise ourselves with PyTorch and to program some of the key parts of a deep learning model pipeline. Although these were fun and useful tasks, we were not given enough time to complete them, as much of the first half of the workshop was taken up with technical issues in setting up the guest accounts for the lab's computers.

The second day started and finished later than the first, with more lectures and an event in the evening, a structure followed throughout the rest of the summer school. The first lecture of the day was on colour, by Maria Vanrell Martoreli. Going into this lecture with no expectations, I came out having found it very useful, with a much deeper understanding of the role of colour in the interpretation of objects in an image, in both human and machine vision systems. It was followed by lectures on image segmentation by Xianghua Xie and local feature descriptors by Krystian Mikolajczyk.

The image segmentation lecture presented some of the latest methods being used, as well as some of the common problems and pitfalls encountered by researchers implementing them. While these two lectures presented a lot of well-articulated ideas in their respective areas, they fell outside my own research interests, so I don't think I got as much value out of them as others in the room.

The last lecture of the day was a rather densely packed overview of deep learning for computer vision by Oscar A Mendez. This was a very engaging lecture with a lot of information, including some good refreshers on more fundamental architectures such as MLPs and CNNs, and a very intuitive introduction to Transformers, a rather complex class of deep learning model that is currently very popular in many research areas. In the evening we went into Norwich city centre for a bowling social event.

Wednesday morning consisted of lectures on shape and appearance models by Tim Cootes and uncertainty in vision by Neill Campbell. Both of these were conducted online over Teams, as the presenters had caught Covid after attending the CVPR conference the previous week. The shape and appearance models lecture was informative but not of particular interest to me, whereas the uncertainty in vision lecture was quite interesting, and the presenter managed to include a good level of audience engagement despite presenting over a webcam.

After lunch we had a lecture on generative modelling by Chris Willcocks. This was a very interesting lecture, as it covered the current trends in generative modelling (e.g., GANs, Transformers) and also looked at architectures with the potential to be the future of the field, such as diffusion and implicit networks. As my own work looks at GANs, I found this talk particularly enlightening, and also comforting, as it agreed with many of the arguments I make in my thesis, such as the current issues with using FID as an evaluation metric. In the evening we attended a dinner at Norwich Cathedral, which gave everyone a good opportunity to network and discuss the week's events with other members of the summer school.

Thursday consisted of another full day of lectures on various topics in computer vision: unsupervised learning by Christian Rupprecht, structured generative models for vision by Paul Henderson, 4D performance capture by Armin Mustafa, egocentric vision by Michael Wray and becoming an entrepreneur by Graham Finlayson. At this point in the week, I was starting to become a little overwhelmed by the amount of information I had taken in on a range of highly technical topics. I think it would have been more beneficial to have a slightly less dense schedule of lectures and to mix in some workshops or seminars, to allow time to fully absorb all of the presentations. Despite this, I did find a lot of value in the lectures on this day, particularly the unsupervised learning lecture in the morning. The evening social event was a relaxed computer vision pub quiz, with a mix of themed questions on computer vision, AI, and local and general knowledge. This was again a good time to get to know the other attendees, and I thoroughly enjoyed it despite missing out on first place by a couple of points (I blame that on the winning team having a local).

Friday morning consisted of the last couple of lectures of the event. The first, Art and AI by Chrisantha Fernando, was particularly insightful and perhaps my favourite of the week. This lecture, by a DeepMind researcher, looked at state-of-the-art generation models such as DALL-E and asked whether an AI could actually create something more than a picture: what we would consider real "Art". To examine this idea, the speaker dissected what we mean by the terms "art" and "emotions" in computational terms, and discussed the possibility of AI art through this viewpoint. I found the mix of cognitive science, computer science and philosophy very engaging, as this cross-section of AI is where my own passion for the subject lies.

After the event finished at midday, I met some of the speakers, organisers and attendees for lunch to chat and reflect on the week. Overall, I found the summer school very enjoyable, if a little lecture-heavy, and would definitely attend again. I came back from the trip eager to try out some of the more intriguing models and architectures discussed, and I will also be going back over some of the key slides when they are released.

Legit Conference or Scam?

post by Peter Boyes (2018 cohort)

“I’m embarrassed and disappointed.” That was the opening line of an email to my supervision team and CDT administrators. This email was a reaction to attending a virtual conference in November to present a paper and hear about research in one of the fields that my PhD spans. The conference I had submitted to and was attending, thankfully virtually, appeared to be some sort of scam. I managed to be the first presenter on the first day, but it unravelled after that. A shamefully quick search online reveals “The World Academy of Science, Engineering and Technology or WASET is a predatory publisher of open access academic journals. The publisher has been listed as a “potential, possible, or probable” predatory publisher by Jeffrey Beall and is listed as such by the Max Planck Society and Stop Predatory Journals.” (https://en.wikipedia.org/wiki/World_Academy_of_Science,_Engineering_and_Technology, accessed 18th November 2021).

In this post I’ll talk through some of the process of writing up and submitting the paper, as many of my peers have done, but will add in some detail of the course my experience ended up taking.

The paper would have been my first: a write-up of the motivations, method, and findings of the first study in my PhD. This was an exploratory interview study with university project management group members around the decision-making process for two capital projects that were (at the time of starting) recently completed at the university. The motivation for the exploration was to inform the next stage of the PhD, which would form the bulk of my thesis.

The paper was co-authored by my supervision team. Their input came mostly in supervision meetings, rather than through a co-written collaborative document as in some group projects; they contributed throughout with guidance on the study: at the design stage, sounding out planning; during the analysis of interview data, namely talking through emerging themes and subthemes, then a second round of analysis; and then reviewing a couple of drafts as the paper was being written. The writing feedback reflected the materials shared with them. The first draft was mostly a skeleton highlighting a structure that could be used to explore the study, from background and motivation through to methodology, results and analysis, and, importantly for this study, the discussion and future work. The second review was larger, with a few of the themes and subthemes drafted. Key comments included refining the lengthy sections into manageable portions with a clearer narrative. The main suggestion was adding more summary subjective qualitative analysis to the themes, partly to save those skimming through from needing to read all the excerpts and re-tread my whole journey, and partly to introduce some subjectivity, some opinion, into the data I was presenting; this was something I needed a push to do, given my rather quantitative background in mathematics. Finally, there was a cut to reduce the length, removing repetition and the waffle that had crept in while drafting the sections and finding the narrative. Their guidance on my first paper was hugely valuable and informed some of the earlier design stages of my second study.

As mentioned earlier in this post, the paper was a write up of my first study, an exploratory one that laid the foundation for some more directed reading into the literature, and ultimately to the main study I am designing/carrying out now based around group decision making. This main study is addressing one of the future directions of research suggested by the paper in incorporating metadata and context to data that is presented to decision makers.

I was finishing up my first study and getting stuck into the write-up process, looking for a suitable place to try to publish, or a conference to engage with, both to present and to hear from people doing research in the same domain as me. There was some self-imposed pressure to find a conference to submit to, having not picked one out earlier in the study process. I jumped at what looked like a great opportunity: a tailor-made conference for my paper and study, the "Decision Theory and Decision Support Systems Conference".

WASET have a paper submission and feedback platform on their website. You create a login, submit your details and paper or abstract drafts for any of their conferences, and communication with the organisers is then done through messages on the platform rather than over email. All seemed easy to me, just some administrative boxes to tick. These messages cover most communication: the paper has been sent off for reviewer comments; there is a submission status to check back on; updates or re-uploads of submissions with revisions. My submission was initially under their abstract-to-full-paper option, which worked well in giving me a deadline to get my paper to around 80% done and the abstract off to them. This came back quickly: the review of the abstract had been completed and I could upload my full paper when ready, with the second deadline now in place. That was submitted a couple of weeks later. I had feedback from a moderator on the platform that there were no formatting issues and that they might be in contact after further reading. Comments from reviewers eventually came back, asking only that I remove the questions in the discussion section and the pieces lifted from the introduction into the abstract. This seemed like minimal feedback, which was a little odd; from hearing what some of my peers had been through, I was surprised at such small changes. They seemed like stylistic requests of the conference, but after some rephrasing and tidying up I was done. The paper was accepted. I chalked it up as a strange one, but was happy to have my first paper in somewhere, and it felt like the home straight, with only the conference and presentation itself left while I cracked on with the design of my next study.

The dates advertised on the conference site for final deadlines kept rolling, but I sat fine with it, as I'd done final edits to my paper and the "camera-ready" version was in. I presumed the conference was undersubscribed and they were trying to attract more interest and paper submissions in the closing months and weeks. In the week ahead of the conference I set aside some time, reread my paper, pulled together a presentation with a few presenter notes, and did a small run-through so I would be ready on the day.

The conference rolled around and I was excited. It wasn't my first virtual conference of the pandemic, as I had already attended a couple, such as GISRUK, and it wasn't my first time presenting online, as I had done so for a few internal presentations with the Mixed Reality Lab and the Horizon CDT retreat. Both of these fell at an earlier stage of this study, so in a way I'd practised talking about this topic and had already fielded some questions on the study design, the potential research impacts, and how it all fitted into the larger picture of my PhD. I received the meeting link on the morning; the exact proceedings hadn't been released, which was a bit odd, but presenters had been grouped and I knew I was in the first wave, before break 1 on day 1. It was supposed to be a two-day conference with three groupings of talks on each day, and links to e-posters sent round for viewing outside of this time. It became apparent quickly that the conference, covering the rather refined area of "Decision Theory and Decision Support Systems", was being run as part of a series of concurrent conferences by WASET, some entirely unrelated to decision making or support systems. This again seemed odd, but not a pressing issue, as I didn't have much experience with conferences, particularly smaller ones; I thought maybe this is how they can be run. The real issue became apparent when the Zoom call started and I could only see one other name on the participant list from those I expected to see in my block. The other names were from the concurrent conferences; they were in the same call and room as me. I checked my link and it appeared to be correct. I was unsettled, but tried to focus on being ready to present. The session chair opened up and read out a running order; I was up first.

After I finished presenting, and the floor was opened for questions, it collapsed. People were asking not about my study, but why they were hearing about capital project management groups and decision making rather than what they were there to present on. The chair pushed to move on to the next presenter. A quick search online turned up the Wikipedia article on WASET mentioned earlier in this post, along with a few other blogs about people's experiences with these conferences and the attendees being defrauded. I exited the call quickly.

Hindsight is a wonderful thing, looking back at the process there were little indicators that something might not be quite what I’d hoped. Maybe I was distracted by a desire to get that first paper over the line and accepted, a badge on my sleeve and a boost of confidence for the next stage of my PhD. Maybe it should be chalked up to inexperience.

I wouldn't wish this on anyone else, particularly other early career researchers and PhD students. Still, it afforded me the opportunity to write up a study that could have sat in draft notes for months while I carried on with other research; the chance to go through the steps of writing up and submitting with my supervision team, albeit for a dud; and the experience of receiving feedback, editing, and forming and delivering a presentation to an audience. It is a shame it had to happen this way, and I am now looking forward even more to writing up and submitting my next study. I hope my experience prevents someone else from falling foul of this sort of scam.

You can read our paper here: The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups.