Insights from the Oxford Machine Learning Summer School

post by Gift Odoh (2022 cohort)

Between the 13th and 16th of July 2023, I attended the Oxford Machine Learning Summer School for health applications at the Mathematical Institute of the University of Oxford. The course, organised by AI for Global Goals in partnership with the University of Oxford's Deep Medicine and CIFAR, focused on advanced areas of machine learning (ML), ranging from statistical and probabilistic approaches to representation learning (an ML approach based on representations of data that make it easier to extract useful information when building classifiers or other predictors [1]), specialised techniques for complex data structures, computer vision, knowledge representation and reasoning, and the integration of symbolic and neural approaches for enhanced AI capabilities.

My interest in the school stemmed from the opportunity to explore ML's diverse applications and, I hoped, to uncover connections between ML techniques and my PhD research, which focuses on robotic teleoperation and human-robot interaction, particularly mental workload indicators and how they can inform robotic assistance schemes in teleoperation. I also saw it as an opportunity to meet people with similar interests in the field and to visit the renowned city of Oxford and the University of Oxford colleges, some of which are known for their rich histories.


The first couple of sessions focused on how we, as humans, understand our environment – how we represent the world and infer its actual state from different observations. These sessions paved the way for representation learning and how intelligent systems can extract useful information from features present in data, particularly when there are no labels. S. M. Ali Eslami, in his session on representation learning without labels, underscored the importance of labels for effective machine learning, but demonstrated how learning can still be achieved when collecting labels is impossible: encoders build representations (an understanding) of the data from inputs, and generative models reverse the process to produce real-world estimates from those representations.

While most sessions focused on probabilistic models based on generative techniques and on causal machine learning, which concern the learning process itself, Professor Pietro Lio from Cambridge presented an intriguing session on graph representation learning – a form of machine learning suited to data organised as networks or graphs, where data points (nodes) are connected by edges (relationships) – making an interesting case for utilising graphs, as they are everywhere in research. Although most of the application areas were in molecule generation for proteins and drugs, the ability to extract meaningful insights and predictions from relational data could also be applied to model robotic assistance schemes that respond to mental workload: nodes could represent operators' mental states, such as attention levels, stress levels or task demands, and edges could signify the relationships and dependencies between them, as the sketch below illustrates.
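To make that idea concrete, here is a minimal, purely illustrative sketch in Python using the networkx library; the node names, feature values and edge weights are hypothetical examples of my own, not material from the summer school or data from my research.

    # A purely illustrative sketch: an operator's mental state modelled as a
    # graph. Node names, feature values and edge weights are hypothetical.
    import networkx as nx

    G = nx.Graph()

    # Nodes carry feature vectors that a graph representation learning model
    # (e.g. a graph neural network) could consume.
    G.add_node("attention", features=[0.8])    # e.g. a normalised gaze-dispersion score
    G.add_node("stress", features=[0.4])       # e.g. a normalised physiological stress index
    G.add_node("task_demand", features=[0.7])  # e.g. a rating of current task complexity

    # Edges encode dependencies between mental states and task demands.
    G.add_edge("task_demand", "stress", weight=0.9)
    G.add_edge("task_demand", "attention", weight=0.6)
    G.add_edge("stress", "attention", weight=0.5)

    # A graph model would propagate node features along these edges to predict,
    # say, overall mental workload and trigger an appropriate assistance scheme.
    for u, v, w in G.edges(data="weight"):
        print(f"{u} -- {v} (weight={w})")

In practice, a graph neural network of the kind described in the session would learn from many such graphs, rather than from hand-set weights like these.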

Another key aspect of the course was computer vision for ML. Some of these sessions covered the evolution of computer vision and its techniques, and unsupervised visual learning for ML applications, particularly medical imaging. Understanding the progress of computer vision and where it stands today has practical implications for my work, given that vision is integral to teleoperation interaction. Christian Rupprecht presented the stages of understanding a scene: scene classification, where the general scene is described; object detection, in which the various objects in the scene are identified; segmentation, which divides the scene into meaningful, distinct parts and regions; the scene graph, which describes the positional relationships between objects; description, in which a richer interpretation of the scene is obtained; and hierarchy, which informs how scenes are decomposed into objects, parts and materials.
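As a concrete illustration of just the object-detection stage, here is a minimal sketch that runs a pretrained Faster R-CNN detector from torchvision; it assumes torchvision 0.13 or later for the weights API, and "scene.jpg" is a placeholder image path rather than anything from the summer school.

    # Minimal sketch of the object-detection stage of scene understanding,
    # using a pretrained Faster R-CNN from torchvision (0.13+ assumed).
    import torch
    import torchvision
    from torchvision.io import read_image
    from torchvision.transforms.functional import convert_image_dtype

    weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
    model.eval()

    # "scene.jpg" is a placeholder; any RGB image of a scene would do.
    img = convert_image_dtype(read_image("scene.jpg"), torch.float)
    with torch.no_grad():
        pred = model([img])[0]  # dict with "boxes", "labels" and "scores"

    categories = weights.meta["categories"]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score > 0.8:  # keep only confident detections
            print(categories[label.item()],
                  [round(c, 1) for c in box.tolist()],
                  round(score.item(), 2))

The later stages – segmentation, scene graphs, description and hierarchy – build on exactly this kind of output, grouping the detected objects and relating them to one another.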

It was, however, useful that the summer school was not just about machine learning techniques in isolation. The segment on Bridging Machine Learning and Collaborative Action Research emphasised the importance of collaboration, especially in areas like digital mental health: for example, the limitations of generalising health states from social media data, methodological issues, the difficulty of capturing other attributes (e.g. offline attributes) and the risks of relying on single data sources. These challenges emphasise the indispensability of interdisciplinary collaboration, which resonated deeply with my belief in merging human-robot interaction with other disciplines for a more holistic approach to tackling the interdependent challenges of robotic assistance in teleoperation. Although some of the techniques seemed unique in their approach and application to specific conditions, I see an opportunity to examine carefully how some of these approaches could come together to enhance robotic autonomy and facilitate better human-robot interaction.

In conclusion, the school added depth to my understanding and expanded the academic horizons from which I can approach my research, through sessions both directly applicable and only marginally relevant. It was also an opportunity to meet other PhD students from diverse backgrounds and corners of the world; our interactions provided valuable global perspectives on the various applications of ML in health research. I must also add that I had the opportunity to explore the historic city of Oxford and its renowned University of Oxford colleges, both through guided tours and solitary walks, which offered cultural immersion and ignited a sense of academic inspiration. My interactions with students, researchers and industry professionals allowed me to forge meaningful connections in machine learning that broadened my understanding and opened doors to potential future collaborations and opportunities. Overall, it was a transformative learning experience that equipped me with a global network and renewed my sense of purpose in my research and professional journey.

References
[1] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013, doi: 10.1109/TPAMI.2013.50.

AI, Mental health and the Human

post by Shazmin Majid (2018 cohort)

Pint of Science 2022 – Bunkers Hill, Nottingham


I delivered a talk about AI, mental health and the human at Pint of Science 2022, which had the theme “A Head Start of Health”. Pint of Science is a grassroots non-profit organisation that runs a worldwide science festival, bringing researchers to a local pub, café or other space to share their scientific discoveries – no prior knowledge needed. There are over 24,000 attendees in the UK, with over 600 events in over 45 cities. There were three talks at the event, all focusing on the theme of mental health.

Structure of the talk:

    1. What is AI
    2. How AI is being used in mental health
    3. AI and mental health: my cool experiences
    4. My current issues with AI and mental health

After days of practice, and even delivering the jokes on cue in my pj’s in the comfort of my living room, the day for presenting arrived. Those that know me know that I’m not too shy when it comes to presenting, but this felt different and I really wanted to get the crowd engaged and practise good storytelling. I arrived on the day and was welcomed, especially by fellow Horizon-er Peter Boyes, who was the one who suggested my talk to the Pint of Science crew. I learnt that I would be giving the last talk, and I did something I have never done before: I walked up to the bar, ordered a big old pint and a packet of crisps, and enjoyed the wait. Normally, I would find this process mildly agonising, having to wait until it’s your go. My parents have a collection of photos of me as a child having to wait for a funfair ride. Let me set the scene – fists in a ball, screaming at the top of my lungs. I guess that never leaves you, which is why I’d much rather go first. The pint helped.

My talk aimed to provide a whistle-stop tour of the ways I’ve interacted with AI and mental health: start off by loosely introducing AI, cover some of the state-of-the-art ways it’s being used, summarise the ways I’ve got to engage with the sector and present what I consider to be the current issues. I can say this is not how it went down. I was approximately three slides in when I was hit with an image that’ll never leave me: a black screen with the text “slide show ended”. It was right at this moment that I realised I had sent over some butchered version of my slide show. I had only one copy of the slides, which I had sent over – how could this happen! I also realised that I had saved the slideshow on my *desktop* (like, seriously, who does that!) with no remote drive links sprinkled in fairy dust to access it. A sudden wave of appreciation for being last hit me, because the crowd just bobbed along – on average, everyone was around three pints down!

Pete and I scrambled in the corner to find another presentation I could quickly deliver, and we settled on an older MRL lab talk about a piece of research I had published, which explored the extent of user involvement in the design of mental health technology. And lo and behold, the new structure:

The new structure of the talk:

    1. Background of mental health technology
    2. The research questions
    3. The method of exploration
    4. Our results
    5. What we recommend for the future

Getting into the nitty-gritty:

    1. Background of mental health technology

Self-monitoring applications for mental health are increasing in number. The involvement of users, informed by its long history in Human-Computer Interaction (HCI) research, is becoming a core concern for designers working in this space. The application of models of involvement, such as user-centered design (UCD), is becoming standardised to optimise the reach, adoption and sustained use of this type of technology.

    2. The research questions

This paper examined the current ways in which users are involved in the design and evaluation of self-monitoring applications, specifically for bipolar disorder, by investigating three questions: a) are users being involved in the design and evaluation of the technology? b) if so, how is this happening? and c) what are the best-practice ‘ingredients’ for the design of mental health technology?

    3. The method of exploration

To explore these practices, we reviewed the available literature on self-tracking technology for bipolar disorder and made an overall assessment of the level of user involvement in design. The findings were reviewed by an expert panel, including an individual with lived experience of bipolar disorder, to form best-practice “ingredients” for the design of mental health technology. This combines the existing practices of patient and public involvement and human-computer interaction, evolving from the generic guidelines of UCD towards ones tailored to mental health technology.

    4. Our results

For question a), we found that of the 13 novel smartphone applications included in the review, 4 had no mention of user involvement in the design, 3 had low user involvement, 4 had medium user involvement and 2 had high user involvement. Regarding question b), we found that despite the existence of established approaches for involving users in design and evaluation, there is large variability in whether the user is involved, how they are involved and to what extent the voice of the user is emphasised – which is the ultimate aim of the design approaches used in mental health technology.

    5. What we recommend for the future

As per question c), it is recommended that users be involved in all stages of design, with the ultimate goal of empowering and creating empathy for the user. Users should be involved early in the process, and this should not be limited to design itself but should extend to the associated research, ensuring end-to-end involvement. The healthcare-based design and human-computer interaction communities need to work together to increase awareness of the different methods available, encourage the use and mixing of those methods, and establish better mechanisms to reach the target user group. Future research using systematic literature search methods should explore this further.

Closing remarks

Adaptability is the moral of the story here! Practice can make perfect, but in the end technology failed me even though my talk was about technology – ironically! I guess I was more proud of delivering the talk in this haphazard way than I would have been if I’d delivered it on cue like I practised. Another reflection is that after four years of doing a PhD, it’s interesting how naturally you can talk about the topic at hand – so rambling for 20 minutes just flowed. Talking about your PhD to a non-technical audience was also a very interesting experience and a great chance to practise good storytelling.

Measure and track your mood with smart clothes

post by Marie Dilworth (2017 cohort)

Have you ever thought about what it would be like to wear a t-shirt that measured your emotions and your mood?

One day this might be a reality!

We are running an online survey to understand what people think about emotion-tracking smart clothing.

We would love to know what you think about the idea.

If you can spare 10-15 minutes, please fill out this survey to support PhD research.

This research is being run by:

  • University of Nottingham, School of Computer Science and
  • Nottingham Biomedical Research Centre, Mental Health Technology

Survey:
https://nottingham.onlinesurveys.ac.uk/would-you-wear-mood-measuring-smart-clothes

 

Thank you for giving your time to support mental health technology research!

Marie Dilworth
PhD Candidate
School of Computer Science
University of Nottingham

 

https://www.nottingham.ac.uk/research/groups/mixedrealitylab/

https://nottinghambrc.nihr.ac.uk/research/mental-health