Diverse Intelligences Summer Institute 2023 Reflective Report

post by Favour Borokini (2022 cohort)

From June 25th to July 15th, 2023, I attended the 6th annual Diverse Intelligences Summer Institute (DISI) at the University of St Andrews. The institute aims to foster interdisciplinary collaborations exploring how intelligence is expressed in humans, non-human animals, artificial intelligence (AI), and other systems.

I was excited to attend the Summer Institute because of my interest in AI ethics from an African and feminist perspective. My current PhD research focuses on the potential affordances and challenges avatars pose to African women. As AI is now often implicated in the creation of digital images, I thought DISI would be a great environment in which to share ideas and insights into how to conceptualise these challenges and opportunities.

The attendees were divided into two groups: Fellows and Storytellers. Fellows were mostly early-career researchers from diverse fields, such as cognitive science, computer science, ethnography, and philosophy. The Storytellers were artists who created or told stories; among their number were an opera singer, a dancer, a weaver, a sci-fi author, a sound engineer and many others. With their creativity and their ability to spur unselfconscious expression in all the participants, the Storytellers brought spontaneity and life to what would surely otherwise have been a dreary three weeks.

DISI 2023 began on a rainy evening, the first of several such rainy days, with an icebreaker designed to help Fellows and Storytellers get to know each other. In the following days, we received a series of engaging lectures on topics as varied as brain evolution in foxes and dogs, extraterrestrial intelligence, psychosis and shared reality, and the role of the arts in visualising conservation science. A typical summer school day had two ninety-minute lectures punctuated by two short recesses and a longer lunch break.

The lecture on psychosis and shared reality was given by Professor Paul Fletcher, a Professor of Neuroscience at the University of Cambridge who had advised the development team of Hellblade, a multi-award-winning video game that vividly portrays mental illness. The game put me in mind of several similar ongoing projects at the CDT researching gaming and the mind. As a Nigerian, I reflected on the framing of psychosis and mental illness in my culture and the non-Western ways these ailments are treated and addressed. That first week, I was quite startled to find that two people I had spoken with casually at dinner and on my way to St Andrews were faculty members. One of these was Dr Zoe Sadokierski, an Associate Professor in Visual Communication at the University of Technology Sydney, Australia, who gave a riveting lecture on visualising the cultural dimensions of conservation science using participatory methods.

In that first week, we were informed that we would each be working on at least one project and at most two (more unofficially), and a pitching session was held over the course of two afternoons. I pitched two projects. The first was to explore the aspirations, fears and hopes of my fellow participants using the Story Completion method, a qualitative research method with roots in psychology, in which a researcher elicits fictional narratives from participants using a brief prompt called a stem. This method helps participants discuss sensitive, controversial subjects by constructing a story told from the point of view of a stranger.

Many of the stories were entertaining and wildly imaginative, but I was particularly struck by the recurring anxiety that, in 2073, the beautiful city of St Andrews would be submerged by rising water levels. This seemed to me a reflection of how attached we had all become to that historic city, and of how attachment to places and things can help us care more.

For my second project, two friends and I (pictured below) interviewed six of our fellow DISI attendees for a podcast titled A Primatologist, a Cognitive Scientist and a Philosopher Walk into a(n Intergalactic) Bar. The idea was to get artists and researchers to tell an ignorant but curious alien on a flying turtle planet called Edna about their work and about the Earth. These interviews sparked wonderfully unplanned, reflective conversations about the nature of life on Earth, our relationship with nature, and human values such as honesty. On the final day, we put together an audio trailer featuring some of the most insightful parts of these conversations as our final presentation.

Photo of our Podcast team. L-R: Antoine Bertin, Favour Borokini, & Matthew Henderson. #TeamEdna

Prone to being critical, I often felt disconcerted by what I perceived as an absence of emphasis on ethics. Having worked in technology ethics and policy, I felt prodded to question the impact and source of a lot of what I heard. In a session on the invisibility of technology, I felt extremely disturbed by the idea that good technology should be invisible. In fact, I felt that invisibility, the sort of melding into perception described as embodiment by postphenomenology, spoke more to efficiency than “good”, bearing in mind use cases such as surveillance.

There were some heated conversations, too, like the one on eugenics and scientific ethics in research. The question was how members of the public were expected to trust scientists if scientists felt ethically compelled not to carry out certain types of research or to withhold sensitive findings obtained during their research.

Then there was the session questioning the decline in “high-risk, high-return” research, which, unsurprisingly, focused on research within the sciences. It prompted comments on the funding cuts to the social sciences, arts and humanities that follow from characterising those fields as low-risk and low-return, and caused me to reflect that, ironically, the very precarity of those fields arguably qualifies them as high-risk, if not high-return.

But the summer school wasn’t all lectures; there were numerous other activities, including zoo and botanical garden trips, aquarium visits, beach walks, forest bathing and salons. During one such salon, we witnessed rousing performances from the storytellers amongst us in dance, music, literature and other forms of art.

An evening beach bonfire with a Frisbee game
Favour and two “dudes” at the entrance to the Edinburgh Zoo

I also joined a late evening expedition to listen to bats, organised by Antoine, one of my co-podcast partners. There was something sacred about walking in the shoes of the bats that evening as we blindfolded ourselves and relied on our partners to lead us in the dark with only the sense of touch, stumbling, as a small river rushed past.

I think the process of actually speaking with my fellow attendees made me feel warm towards them and their research. I believe ethics is always subjective, and our predispositions and social contexts shape what we view as ethical. At DISI, I found that ethics can be a journey, as I discovered ethically questionable twists in my own perspective.

It was my first time at the beach!
At the St Andrews Botanical Garden

This thawing made me enjoy DISI more, even as I confirmed that I enjoy solitary, rarefied retreats. As the final day drew near, I felt quite connected to several people and had made a few friends whom, I knew, like the rarefied air, I would miss.

The success of DISI is in no small part due to the efforts of the admin team: Erica Cartmill, Jacob Foster, Kensy Cooperrider, and Amanda McAlpin-Costa. Our feedback was constantly solicited, and they were quite open about the changes from the previous year.

I had a secret motive for attending. My research’s central focus is no longer AI, and I felt rather out of place without something I thought was core to the theme. But a conversation with Sofiia Rappe, a postdoctoral Philosophy and Linguistics Fellow, led to the realisation that the ability and desire to shapeshift is itself a manifestation of intelligence – one modelled by many non-human animals, reflecting awareness and cognition about how one fits in and how one ought to navigate one’s physical and social environment.

I look forward to returning someday.

With my friend Khadija, on the last day
After Cèilidh-ing, with Mia and Paty

You can listen to our podcast here: SpaceBar_Podcast – Trailer 

The Intersection of AI and Animal Welfare in Cat Royale: A Reflection on Public Engagement as a Computer Vision Expert

post by Keerthy Kusumam (2017 cohort)

Recently, I had the chance to be part of an interview video focused on my role as a computer vision expert in the upcoming project Cat Royale, developed by Blast Theory. The project aims to explore the impact of AI on animals, specifically cats. As a computer vision expert, I was thrilled to share my work and knowledge with the audience.

Reflecting back on the experience, I realize that my main aim for the video was to educate the public about the use of computer vision technology in animal welfare. The field of animal welfare has always been close to my heart, and I saw this opportunity as a way to demonstrate the impact that technology can have in this area. The Cat Royale project is a unique and creative way to showcase the application of computer vision technology in animal welfare, and I wanted to highlight this aspect of the project in the video.

The target audience for the video was the general public with an interest in technology, AI, and animal welfare. To reach this audience, I had to consider and adapt my language and presentation to suit their level of understanding and interest. I broke down the concept of computer vision technology and its application in the Cat Royale project into simple terms that could be easily understood by everyone. I also emphasized the importance of involving experts in animal welfare in the design of the project to ensure the comfort and safety of the cats.

In the video, I discussed how the computer vision system in Cat Royale measures the happiness of the cats and learns how to improve it. I highlighted the unique design of the utopia created for the cats, where their every need is catered for, and how the computer vision system interprets their activities in order to make the cats happier. I explained that the ultimate goal of the project was to demonstrate the potential of computer vision technology in improving animal welfare.

One of the biggest challenges I faced in the video was ensuring that I provided enough technical detail for the audience to understand the concept of computer vision technology, while also keeping it simple enough for a general audience to grasp. To achieve this balance, I used analogies and examples that related to the audience’s everyday lives, making it easier for them to understand the concept.

It is important to note that people often assume that the computer vision system makes decisions about the happiness state of the cats. However, this is not the case. In fact, it is the cat experts who identify a list of behaviours that show the happiness state of the cats. The computer vision system can then reliably detect these behaviours, which inform the happy or not happy state of the cat.
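As a rough illustration of this division of labour, the sketch below shows how expert-defined behaviour labels, rather than the vision model itself, can drive the happiness estimate. The behaviour names, weights and threshold here are invented for illustration; they are not the actual Cat Royale expert list or detection pipeline.

```python
# Illustrative sketch only: the behaviour labels and weights below are
# invented, not the actual Cat Royale expert list or detection system.

# Hypothetical expert-defined mapping from observed behaviours to
# happiness indicators (positive = associated with a happier state).
EXPERT_BEHAVIOUR_SCORES = {
    "playing": 2,
    "grooming": 1,
    "eating": 1,
    "hiding": -2,
    "tail_flicking": -1,
}

def estimate_happiness(detected_behaviours):
    """Combine behaviours detected by a vision model into a simple
    happy / not-happy estimate. The vision model is assumed to output
    behaviour labels only; it does not judge happiness directly."""
    score = sum(EXPERT_BEHAVIOUR_SCORES.get(b, 0) for b in detected_behaviours)
    return "happy" if score > 0 else "not happy"

print(estimate_happiness(["playing", "grooming"]))       # happy
print(estimate_happiness(["hiding", "tail_flicking"]))   # not happy
```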

In conclusion, the interview video was a great opportunity to share my work and knowledge with a wider audience and to spread awareness of the exciting possibilities of computer vision technology in the field of animal welfare. I was thrilled to be part of such a unique and creative project. The experience has also given me a new perspective on the importance of adapting my presentation to suit my audience and ensuring that my message is effectively communicated.

AI, Mental Health and the Human

post by Shazmin Majid (2018 cohort)

Venue: Pint of Science 2022, Bunkers Hill, Nottingham

I delivered a talk about AI, mental health and the human at Pint of Science 2022, which had the theme “A Head Start of Health”. Pint of Science is a grassroots non-profit organisation that runs a worldwide science festival, bringing researchers to a local pub, café or other space to share their scientific discoveries – no prior knowledge needed. There are over 24,000 attendees in the UK, with over 600 events in over 45 cities. There were three talks at the event, all focusing on the theme of mental health.

Structure of the talk:

    1. What is AI
    2. How AI is being used in mental health
    3. AI and mental health: my cool experiences
    4. My current issues with AI and mental health

After days of practice, even delivering the jokes on cue whilst in my PJs in the comfort of my living room, the day for presenting arrived. Those who know me know that I’m not too shy when it comes to presenting, but this felt different: I really wanted to get the crowd engaged and practise good storytelling. I arrived on the day and was welcomed, especially by fellow Horizon-er Peter Boyes, who was the one who suggested my talk to the Pint of Science crew. I learnt that mine would be the last talk, and I did something I have never done before: I walked up to the bar, ordered a big old pint and a packet of crisps, and enjoyed the wait. Normally, I would find this process mildly agonising, having to wait until it’s your go. My parents have a collection of photos of me as a child having to wait for a funfair ride. Let me set the scene – fists in a ball, screaming at the top of my lungs. I guess that never leaves you, which is why I’d much rather go first. The pint helped.

My talk aimed to provide a whistle-stop tour of the ways I’ve interacted with AI and mental health: start by loosely introducing AI, outline some of the state-of-the-art ways it’s being used, summarise the ways I’ve engaged with the sector, and present what I consider to be the current issues. I can say this is not how it went down. I was approximately three slides in when I was hit with an image that’ll never leave me: a black screen with the text “slide show ended”. It was right at that moment I realised I had sent over a butchered version of my slide show. I had only one copy of the slides, which I had sent over – how could this happen! I also realised that I had saved the slideshow on my *desktop* (like, seriously, who does that!) with no remote drive link – no sprinkle of fairy dust – to access it. A sudden appreciation of being last hit me like a wave, because the crowd just bobbed along; on average, everyone was around three pints down!

Pete and I scrambled in the corner to find another presentation I could quickly deliver, and we settled on an older MRL lab talk about a piece of research I had published. This work explored the extent of user involvement in the design of mental health technology. And lo and behold:

The new structure of the talk

    1. Background of mental health technology
    2. The research questions
    3. The method of exploration
    4. Our results
    5. What we recommend for the future

Getting into the nitty-gritty:

    1. Background of mental health technology

Self-monitoring applications for mental health are increasing in number. User involvement has a long history in Human-Computer Interaction (HCI) research and is becoming a core concern for designers working in this space. The application of models of involvement, such as user-centred design (UCD), is becoming standardised to optimise the reach, adoption and sustained use of this type of technology.

    2. The research questions

This paper examined the current ways in which users are involved in the design and evaluation of self-monitoring applications, specifically for bipolar disorder, by investigating three questions: a) are users being involved in the design and evaluation of the technology? b) if so, how is this happening? and c) what are the best-practice ‘ingredients’ for the design of mental health technology?

    3. The method of exploration

To explore these practices, we reviewed the available literature on self-tracking technology for bipolar disorder and made an overall assessment of the level of user involvement in design. The findings were reviewed by an expert panel, including an individual with lived experience of bipolar disorder, to form best-practice “ingredients” for the design of mental health technology. This combines existing practices from patient and public involvement and human-computer interaction, evolving the generic guidelines of UCD into ones tailored towards mental health technology.

    4. Our results

For question a), we found that of the 13 novel smartphone applications included in this review, 4 were classified as having no mention of user involvement in the design, 3 as having low user involvement, 4 as having medium user involvement, and 2 as having high user involvement. For question b), we found that despite the existence of established approaches for involving users in design and evaluation, there is large variability in whether the user is involved, how they are involved, and the extent to which there is a reported emphasis on the voice of the user – the ultimate aim of the design approaches used in mental health technology.

    5. What we recommend for the future

As per question c), we recommend that users are involved in all stages of design, with the ultimate goal of empowering and creating empathy for the user. Users should be involved early in the design process, and this involvement should not be limited to design itself but should extend to associated research, ensuring end-to-end involvement. The healthcare design and human-computer interaction communities need to work together to increase awareness of the different methods available, encourage the use and mixing of these methods, and establish better mechanisms for reaching the target user group. Future research using systematic literature search methods should explore this further.

Closing remarks

Adaptability is the moral of the story here! Practice can make perfect, but in the end technology failed me – ironically, given that my talk was about technology. I think I was prouder of delivering the talk in this haphazard way than I would have been had I delivered it on cue as I’d practised. Another reflection: after four years of doing a PhD, it’s interesting how naturally you can talk about your topic – rambling for 20 minutes just flowed. Talking about your PhD to a non-technical audience was also a very interesting experience and a great opportunity to practise good storytelling.

Safe and Trusted Artificial Intelligence 2021

post by Oliver Miles (2018 cohort)

Over three days, from 12th to 14th July 2021, I attended and participated in the Safe and Trusted Artificial Intelligence (STAI) summer school, hosted by Imperial College and King’s College London. Tutorials were given by leading academics, experts from British Telecom (BT) presented a session on industry applications, and I, along with several other PhD students, took part in a workshop speculating on AI interventions in the healthcare setting, presenting our work back to the wider group. In the following, I’ll summarise key contributors’ thoughts on what is meant by ‘safe and trusted’ in the context of AI, and I’ll outline the themes and applications covered during the school that I found most relevant to my own work. Two salient lessons for me concerned contemporary efforts to reconcile accuracy with interpretability in the models driving AI systems, and efforts to systematically gauge human-human and human-machine alignment of values and norms, increasingly seen as critical to societal acceptance or rejection of autonomous systems.

When I read or hear the term ‘Artificial Intelligence’, even in the context of my peers’ research into present-day, familiar technologies such as collaborative robots or conversational agents, and despite the tangible examples in front of me, I still seem to envision a future that leans toward science fiction. AI has always seemed to me intrinsically connected to simplistic, polarised visions of utopia or dystopia, in which unity with some omnipotent, omniscient technology ultimately liberates or enslaves us. So when it comes to considering STAI, I perhaps unsurprisingly default to ethical, moral, and philosophical standpoints on what a desirable future might look like. I obsess over a speculative AI’s apparent virtues and vices rather than considering the practical realities of how such futures are currently being realised and what my involvement in the process might mean for both me and the developing AI in question.

STAI began by addressing these big-picture speculations as we considered the first theme – ethics of AI. According to AI professor Michael Rovatsos, ethical AI addresses the ‘public debate, impact, and human and social factors’ of technological developments, and the underlying values driving or maintaining interaction (2021). In a broad sense, there was certainly agreement that ethical AI can and should be thought of as the management of a technology’s impact on contentious issues such as ‘…unemployment, inequality, (a sense of) humanity, racism, security, “evil genies” (unintended consequences), “singularity”, “robot rights” and so on’ (Rovatsos, 2021). An early challenge, however, was to consider ethics as itself an issue to be solved: a matter of finding agreement on processes and definitions as much as on specific outcomes and grand narratives. In short, it felt like we were being challenged to consider ethical AI as simply…doing AI ethically! Think ‘ethics by design’, or perhaps, in lay terms, pursuing a ‘means-justified end’.

To illustrate this, if my guiding principles when creating an AI technology are present in the process as much as in the end product, then when I think of ‘safe’ AI, I might consider the extent to which my system gives ‘…assurance about its behavioural correctness’; and when I think of ‘trusted’ AI, I might consider the extent of human confidence in my system and its decision making (Luck, 2021). A distinction between means and end – or between process and goal – appeared subtle but important in these definitions: while ‘assurance’ and ‘confidence’ appear as end goals synonymous with safety and trustworthiness, they are intrinsically linked to processes of accuracy (behavioural correctness) and explicability (of the system and its decision-making rationale).

In her tutorial linking explainability to trustworthiness, Dr Oana Cocarascu, a lecturer in AI at King’s College London, gave an example of the inclination to exaggerate the trustworthiness of some types of data-driven models that, ‘…while mathematically correct, are not human readable’ (Cocarascu, 2021). Morocho-Cayamcela et al. (2019) demonstrate this difficulty of reconciling accuracy with interpretability within the very processes critical to AI, creating a trade-off between fully attaining the two end goals in practice (Figure 1).
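To make that trade-off concrete, here is a minimal sketch of my own (not taken from the tutorial or from Morocho-Cayamcela et al.) comparing a shallow, human-readable decision tree with a large random-forest ensemble on synthetic data; the ensemble typically scores a little higher, but offers no single readable rule set.

```python
# Generic illustration of the accuracy/interpretability trade-off on
# synthetic data; not the models or figure discussed in the tutorial.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree: its full decision logic is human-readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print("tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # the entire rule set fits on a few lines

# A 200-tree ensemble: usually more accurate, but no single readable rule set.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```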

My first lesson for ‘doing AI ethically’ is therefore the imperative to demonstrate accuracy and explainability in tandem and without compromise to either. However, it doesn’t follow that this alone will ensure safe and trusted outcomes. A perfectly accurate and interpretable system may lead to confidence in mechanism, but what about confidence in an AI’s apparent agency?

In her tutorial ‘AI, norms and institutions’, Dr Nardine Osman talked about the ‘how’ of achieving STAI by harnessing values themselves. She convincingly demonstrated several approaches employing computational logic (e.g. ‘if-then’ rules) in decision-making algorithms deployed in complex social systems. The following example shows the values of freedom vs safety as contingent on behavioural norms in routine airport interactions, expressed as a ‘norm net’ (Fig. 2).

Serramia et al. (2018) visualise their linear approach to ethical decision making in autonomous systems, positioning conventionally qualitative phenomena – human values (e.g. safety) – as contingent on and supported by societal norms, such as the obligation to provide passports or forms. Efforts to break down and operationalise abstract norms and values quantitatively (e.g. weighting by hypothetical preference or observed occurrence) demonstrate how apparent features of human agency, such as situational discernment, might become more commonplace in negotiating safe and trusted outcomes. My second lesson and main takeaway from STAI’21 was therefore the imperative of sensitising AI, and the design of AI, to the nuances of social values – distinguishing between value preferences, end goals, social norms and so forth.
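As a toy illustration of that general idea – my own sketch, not Serramia et al.’s actual optimisation formulation – the snippet below ranks candidate airport norms by how strongly they promote preference-weighted values. The norms, values and weights are all invented for the example.

```python
# Toy sketch of value-weighted norm selection, loosely inspired by the
# approach described above; all names and numbers are invented.

# Hypothetical preference weights over values (higher = more preferred).
value_weights = {"safety": 0.6, "freedom": 0.4}

# Candidate norms and the degree to which each promotes (+) or demotes (-)
# each value, in a routine airport-interaction setting.
norms = {
    "require_passport_check": {"safety": 1.0, "freedom": -0.5},
    "allow_fast_track_lane": {"safety": -0.2, "freedom": 0.8},
}

def norm_score(supported_values):
    """Weight each value a norm affects by the preference for that value."""
    return sum(value_weights.get(v, 0.0) * s for v, s in supported_values.items())

# Rank the candidate norms by their preference-weighted value support.
for name, values in sorted(norms.items(), key=lambda kv: norm_score(kv[1]), reverse=True):
    print(f"{name}: {norm_score(values):.2f}")
```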

Lastly and significantly, attending and participating in STAI’21 has given me invaluable exposure to the practicalities of achieving desirable AI outcomes. The focus on ‘doing AI ethically’ has challenged me to pursue safety, trustworthiness, and other desirable qualities in my own work – mechanistically in terms of ensuring explainability of my methods and frameworks; and substantively, in terms of novel approaches to conceptualising values and positioning against social norms.


References

Cocarascu, O. (2021) XAI/Explainable AI, Safe and Trusted AI Summer School, 2021. https://safeandtrustedai.org/events/xai-argument-mining/

Luck, M. (2021) Introduction, Safe and Trusted AI Summer School, 2021. https://safeandtrustedai.org/event_category/summer-school-2021/

Morocho-Cayamcela, M. E., Lee, H., & Lim, W. (2019). Machine Learning for 5G/B5G Mobile and Wireless Communications: Potential, Limitations, and Future Directions. IEEE Access, 7, 137184–137206. doi:10.1109/ACCESS.2019.2942390

Osman, N. (2021) AI, Norms and Institutions, Safe and Trusted AI Summer School, 2021. https://safeandtrustedai.org/events/norms-and-agent-institutions/

Rovatsos, M. (2021) Ethics of AI, Safe and Trusted AI Summer School, 2021. https://safeandtrustedai.org/events/ethics-of-ai/

Serramia, M., Lopez-Sanchez, M., Rodriguez-Aguilar, J. A., Rodriguez, M., Wooldridge, M., Morales, J., & Ansotegui, C. (2018). Moral Values in Norm Decision Making. IFAAMAS, 9. www.ifaamas.org