Reflecting on my Journal Paper Submission

post by Matthew Yates (2018 cohort)

On 16th June 2022, my paper “Evaluation of synthetic aerial imagery using unconditional generative adversarial network” was accepted into the following August edition of the journal “ISPRS Journal of Photogrammetry and Remote Sensing”. This was my first paper to be published and my first time going through the academic peer-review process.

Although the paper has only just been published in summer 2022, the research began in 2019, with much of the initial work taking up a large portion of the first year of my PhD. The motivation for the paper was to take all the early research I had been doing for my PhD thesis and produce a deliverable publication from it. I was keen to do this for two reasons: it would give me a chance to get ahead of the thesis-writing end stage of the PhD, as the paper would more or less cover the first empirical chapter, and it would introduce me to the peer-review process, with which I had no real prior experience.

After delays due to the outbreak of COVID and the subsequent lockdown measures, the paper was submitted to the Journal of Machine Learning in summer 2020. The scope of the paper had been stripped back from the original ideas, with fewer models being benchmarked due to the inaccessibility of the Computer Vision GPU cluster at that time. After a couple of months of waiting, the journal came back with a “Major Revisions” decision along with plenty of comments from the three reviewers. They deemed the paper to be lacking a substantial enough contribution to warrant publication at that stage, and there was a general negative sentiment amongst the reviews. I resubmitted a month later, after responding to the reviewers’ comments and making large amendments to the paper, only to receive a rejection after the next round of reviews. As this was the first paper I had tried to get published, I was rather disheartened to receive this decision after spending so much time on the work. My supervisors, having gone through the process many times, were less concerned; they told me this was fairly normal and that it would be a good idea to submit to a different venue that might be more appreciative of the work.

In early spring 2021 I sent the paper to the ISPRS Journal of Photogrammetry and Remote Sensing. The decision I received here was another rejection, although this one came with much more constructive criticism, and I was advised to revise and resubmit at a later date. As this was over a year on from my original submission attempt, I had also been working on additional research for my PhD, and I made the decision to incorporate some of these new results into the original paper, significantly increasing the contributions of the work. At this point my primary supervisor, Mercedes Torres Torres, brought in Michael Pound to add a new perspective on the work and give me some fresh feedback before resubmission. After submitting to this journal again in October 2021, I was given a “major revisions” decision in December 2021; the reviewers, a mix of old and new from the previous failed attempt at the same journal, had responded much more positively to the changes and additional content, but thought the paper still required additional work and depth in some parts. In January 2022 I resubmitted, hoping that this would be the last time, but received another round of corrections in April. At this point I was getting fairly fatigued with the entire process, having done the bulk of the work years ago, with each round of revisions taking months. Luckily the reviews in this last round were positive, and only one of the three reviewers called for additional work to be done. As I could see I was close to publication, I went over all of this final reviewer’s comments in detail and responded accordingly, as I did not think I could face another round of revisions and wanted to move on to other research. Luckily the next decision was an acceptance, with all the reviewers now satisfied with the work.

The acceptance of the paper was a huge relief, as it felt like the time and effort my collaborators and I had put into it was finally vindicated. I was additionally pleased as this is my first publication, something I had been looking to achieve for a few years. The paper also represents the first part of my PhD project and gives that whole stage of research more credibility now it has gone through the peer-review process. Following this publication, I have been invited to join the journal’s list of reviewers for submissions in my field. This is something I would be interested in doing to gain an insight into the other side of the review process, which could feel quite opaque at times. I have also been invited to publish further research in other related journals. These initial responses to the publication have shown that it was worth enduring the rather lengthy and sometimes unpredictable process of peer review.

Outreach in the time of Covid

post by Luke Skarth-Hayley (2018 cohort)  

It’s tough to do outreach during a pandemic. What can you do to help communities or communicate your research to the public when it’s all a bit iffy even being in the same room as another person with whom you don’t live?

So, what can we do?

Well, online is obvious. I could certainly have done an online festival or taught folks to code via Zoom or Skype or Teams or something.

I don’t know about you, but I’ve got a serious case of online meeting fatigue. We’re not meant to sit for hours talking in tiny video windows. I’m strongly of the opinion that digital systems are better suited to asynchronous communication than to instantaneous conversation.

So, what did I do?

I took my research online, asynchronously, through various means such as releasing my code as an open-source plugin for the Unity game engine and creating tutorial videos for said plugin.

Open-Source Software

What is open source? I’m not going to go into the long and storied history of how folks make software available online for free, but the core of it is that researchers and software developers sometimes want to make the code that forms their software freely available for others to build on, use, and form communities around. A big example of open-source software is the Linux operating system: Linux-based OSes form the foundation on which most of the websites and internet-connected services you use daily are built. So, sometimes open-source software can have huge impacts.

Anyway, it seems like a great opportunity to give back from my research outputs, right? Just throw my code up somewhere online and tell people and we’re done.

Well, yes and no. Good open-source projects need a lot of work to get them ready. I’m not going to say I have mastered this through my outreach work, but I have learned a lot.

First, where are you going to put your code? I used GitHub in the end, given it is one of the largest sites for finding repositories of open-source software. But you might also want to consider GitLab, SourceHut, or many others.

Second, each site tends to have guidance and best practices on how to prepare your repository for the public. In the case of GitHub, when you’ve got research code you want to share, in addition to the code itself you want to:

      • Write a good readme document.
      • Decide on an open-source license to use and include it in the repository.
      • Create a contribution guide if you want people to contribute to your code.
      • Add a CITATION.cff file to help people cite the code correctly in academic writing.
      • Get a DOI (https://www.doi.org/) for your code via Zenodo (great guide here: https://guides.lib.berkeley.edu/citeyourcode) so it can be cited correctly.
      • Create more detailed documentation, such as via GitHub’s built-in repository wiki, or via another method such as GitHub pages or another hosted documentation website.
      • Bonus: Consider getting a custom domain name to point users to that redirects to your code or that hosts more information.
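To make the CITATION.cff point concrete, a minimal file might look like the sketch below (the title, author name, and version are placeholders for illustration, not taken from any real repository):

```yaml
# CITATION.cff — place in the repository root; GitHub detects it
# automatically and shows a "Cite this repository" button.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "My Research Plugin"
authors:
  - family-names: "Doe"
    given-names: "Jane"
version: "1.0.0"
date-released: "2022-06-01"
```

If you also mint a DOI via Zenodo, the format lets you record it under an `identifiers:` key, so academic citations resolve to a fixed, archived release rather than a moving repository.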

That’s already a lot to consider. I also discovered a couple of other issues along the way:

Academic Code

Academic Code, according to those in industry, is bad. Proving an idea is not the same as implementing it well and robustly, with all the tests and enterprise code separation, etc. That said, I have seen some enterprise code in my time that seems designed to intentionally make the software engineer’s job harder. But there is a grain of truth regarding academic code. My code was (and still is in places) a bit of a hacked together mess. Pushing myself to prepare it for open source immediately focused the mind on places for improvement. Nothing is ever done, but I did find ways to make the code more adaptable and flexible to diverse needs rather than just my own, fixed some outstanding bugs, and even implemented custom editors in Unity so non-programmers would be able to do more with the plugin without having to edit the scripts that underpin it. In addition to making the code better for others, it made it better for me. Funny how we do what’s best for ourselves sometimes under the guise of what’s best for others.

Documenting a Moving Target

No software is ever done. Through trying to break the Curse of Academic Code as above, I rewrote a lot of my plugin. I had already started documenting it, though. Cue rewriting things over and over. My advice is to split your docs into atomic elements as much as possible: if you are using a wiki, use one page per component, or per whatever smallest-yet-useful element you can divide your system into for the purposes of communicating its use. Accept that you might have to scrap lots and start again. Track changes in your documentation through version control or some other mechanism.

Publicising Your Open-Source Release!

Oh no, the worst bit. You must put your code child out in the world and share it with others. Plenty of potential points to cringe on. I am not one for blatant self-promotion and rankle at the idea of personal brands, etc. Still, needs must, and needs must means we dive into “Social Media”. I’m fortunate I can lean on the infrastructure provided by the university and the CDT: I can ask for the promotion of my work through official accounts, ask for retweets, etc. But in general, my advice is: put it out there, don’t worry too much, and be nice to others. If people give you grief or start tearing apart your code, find ways to distinguish real feedback from plain nastiness. You are also going to get people submitting bug reports and asking for help using your code, so be prepared to balance doing some of that with concentrating on your actual research work.

Tutorial Videos

Though I could quite quickly point back to the last section and say, “MOVING TARGET!”, there is value in video tutorials, even if you must re-record them as the system changes. For a start, what you see is what you get, or more accurately, what you get to do. Showing people at least one workflow with your code and tools, in a way that they can recreate, is immediate and useful. It can be quick to produce with a bit of practice, and it beats long textual documentation with the occasional picture (though that isn’t without its place). Showing your thinking and uses of the thing can help you see the problems and opportunities, too. Gaps in your system, gaps in your understanding, and gaps in how you explain your system are all useful to flag for yourself.

Another nice get is that, depending on the platform, you might see an accidental boost in awareness of your work and released code through The Algorithm that drives the platform. Problems with Algorithms aside, ambient attention on your work through recommended videos (e.g. on YouTube) can be another entry point for people to discover it and, their attention and interest willing, prompt them to find out more. This can be converted into contacts with people who might want to use your tools and might then participate in studies; or it may draw attention to something you are making with your tools, again turning viewers into players or potential study participants.

But how do you go about recording these things? Well, let’s unwrap my toolkit. First up, you’re going to need a decent microphone, maybe a webcam too. Depending on how fancy you want to get, you could use a DSLR, but those can be prohibitively expensive. Maybe your research group has one you can borrow? Same goes for the microphone. I can recommend the Blue Yeti USB microphone. It’s a condenser microphone, which means the sound quality is good, but it can pick up room noise quite a bit. I just use a semi-decent webcam with at least 720p resolution, but I’ve had that for years. This is in case you want to put your face on your videos at any point. Just how YouTube do you want to be?

Anyway, you have some audio and maybe video input. You have a computer. My next recommendation is to download OBS Studio, which you can get at https://obsproject.com/. This is a piece of software that you can use to record or stream video from your computer. It even has a “Virtual Camera” function so you can pipe it into video conferencing software like Zoom or Microsoft Teams. Cue me using it to create funny effects on my webcam and weird echoes on my voice. But, in all seriousness, this is a very flexible piece of freely available software that allows you to set up scenes with multiple video and audio inputs that you can then switch between. Think of it as a sort of home broadcasting/recording kit. It is very popular with people making content for platforms like YouTube and Twitch.tv. I’ll leave the details to you to sort out, but you can quite quickly set up a scene where you can switch between your webcam and a capture of your computer’s desktop or a specific application, and then start recording yourself talking through whatever it is you want to explain. For example, in my tutorial videos I set it up so I could walk through using the plugin I created and open-sourced, showing how each part works and how to use the overall system. Equally, if you aren’t confident talking and doing at the same time, you could record a video of the actions to perform, and then later record a separate audio track talking through what you are doing in the video. For the audio, you might want to watch the video as you talk and/or read from a script, using something like Audacity, a free audio recording tool you can download from https://www.audacityteam.org/.

Which brings me on to my next piece of advice. Editing! This is a bit of a stretch goal, as it is more complex than just straight up recording a video of you talking and doing/showing what you want to communicate. You could just re-record until you get a good take. My advice in that case would be to keep each video short to save you a lot of bother. Editing takes a bit more effort but is useful and can be another skill you can pick up and learn the basics of with reasonable effectiveness. Surprisingly, there is some excellent editing software out there that is freely available. My personal recommendation is Davinci Resolve (https://www.blackmagicdesign.com/products/davinciresolve/), which has been used even to edit major film productions such as Spectre, Prometheus, Jason Bourne, and Star Wars: The Last Jedi. It is a serious bit of kit, but totally free. I also found it relatively simple to use after an initial bit of experimentation, and it allowed me to cut out pauses and errors, add in reshot parts, overdub audio, and so on. This enables things like separating recording the actions you are recording from your voiceover explanation. Very useful.

Next Steps

My public engagement is not finished yet. My research actively benefits from it. Next up I intend to recruit local and remote game developers in game jams that use the plugin, specifically to evaluate the opportunities and issues that arise with its use, as well as to build an annotated portfolio as part of my Research through Design-influenced methodology.

Conclusion

So, there you have it. Ways I’ve tried to get my research out to the public, and what I plan to do next. I hope the various approaches I’ve covered here can inspire other PhD students and early-career researchers to try them out. I think some of the best value in most of these comes from asynchronicity: with a bit of planning we can communicate and share various aspects of our research in ways that allow a healthy work-life balance and that can be shaped around our schedules and circumstances. As a parent to a young child, I know I’ve appreciated being able to stay close to home and work flexible hours that allow me to tackle the unexpected while still doing the work. If there is one thing I want to impress upon you, it is this: make your work work for you, on your terms, even if it involves communicating your work to tens, hundreds, thousands of people or more. Outreach can take any number of forms, as long as you are doing something that gives back to or benefits some part of society beyond academia.

You can get the Unity plugin here: https://github.com/lukeskt/Reactive-Mise-en-scene

My Internship at Capital One

post by Ana Rita Pena (2019 cohort)

Interning at Capital One

Between May and October 2021 I held a part-time internship with my Industry Partner, Capital One. Capital One is a credit card company that launched its operations in the UK in 1996 and has its parent company in the US. The company is known for being technology driven and in the specific UK case focusing on credit building cards as their main product.

My internship with Capital One UK consisted of working on several projects as part of their Responsible AI initiative, owing to my interest in topics related to “ethical” machine learning and FAccT/FATE (Fairness, Accountability, Transparency and Explainability).

The Responsible AI initiative initially consisted of three projects: the Trustworthy Algorithm Checklist, Global Methods, and Individual Level Methods. The Trustworthy Algorithm Checklist project was already under way when I joined the company in May 2021. It consisted of creating a checklist for model developers to complete during the model development process, in order to instigate some reflection on, and mitigation of, the ethical risks and consequences associated with a new model. The Global Methods project was subdivided into two parts. Overall, it aimed to evaluate different explainability methods and provide guidance and recommendations on methods to be adopted internally by the Data Science team. The first part consisted of interviewing stakeholders from different departments to better understand which information each of them needed about the models, and the second part consisted of a technical evaluation of the tools. Finally, the third project, Individual Level Methods, aimed to explore how consumers understand different explainability methods as representations of their individual outcomes. This third project never went ahead due to lack of time.

Day to day, I worked within the Data Science Acquisition team, as my manager was based in this team and spent a percentage of his time working on the Responsible AI initiative; however, my workflow was separate from the rest of the team’s. Being able to attend the team’s meetings allowed me to gain a better understanding of the workings of the company and the processes involved in model development and monitoring.

In the following sections I will describe the two projects I worked on in more detail, as well as some general reflections and thoughts on the internship.

Trustworthy Algorithms Checklist

Over the last few years there have been several stories in the press of algorithms which, once implemented, end up having unintended consequences that negatively affect users, for example the UK’s A-level grading algorithm controversy, which deflated the grades of students from lower socio-economic backgrounds the most. This has led research in the areas of “ethical” AI to cross into real-world applications. The Trustworthy Algorithm Checklist project aims to design a checklist which will make model developers actively reflect on the unwanted impacts of the model they are building, as part of the development process. After completion, the checklist would then go through an ethics panel, composed of stakeholders from different business departments within the company, for an approval process. The initial draft checklist was divided into three sections: Technical Robustness; Diversity, Non-discrimination and Fairness; and, finally, Accountability.

The next iteration of the checklist design introduced more sections, based on the Ethical Principles for Advanced Analytics and Artificial Intelligence in Financial Services created by UK Finance, a trade association for the UK banking and financial sector. This resulted in the following five sections: Explainability and Transparency; Integrity; Fairness and Alignment to Human Rights; Contestability and Human Empowerment; and Responsibility and Accountability. It was at this stage that I joined the company and started working on the project, which involved stakeholders from the legal and compliance departments as well as from the data science department. This second iteration of the checklist was trialled with a model in the initial stages of development, and the trial showed that most of the prompts in the checklist were already covered by existing Capital One model policy. To avoid the checklist becoming just another form that needs to be submitted, the team decided to put more emphasis on the ethics panel discussion meeting, with the checklist serving as an initial prompt for a discussion intended to foster critical reflection, aided by stakeholders who come from different backgrounds and hence bring different perspectives.

While this project initially only focused on the algorithmic aspect of decision making, the team involved discussed the possibility of expanding the checklist to the process of development of Credit Policies. It is the combination of the Algorithmic Risk Assessment with the Credit Policy which will end up impacting the consumer and hence the need to critically evaluate both these parts.

Explainability Toolbox

Global methods are a set of tools and visualisations that help us better understand the way complex machine learning models work at an overall, rather than individual, level. These methods can focus on the variables, for example which variables have the biggest impact on the result or how different variables are related among themselves, or on the general decision rules of the model, for example summarising a complex model by approximating it with a set of simple decision rules.

Including explainability methods in the machine learning work process is quite important, as they allow us to verify the behaviour of a model at any time and check that it is working well and as expected.
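As a small illustration of one widely used global method (this is a generic scikit-learn sketch on synthetic data, not Capital One’s internal tooling), permutation importance shuffles each input variable in turn and measures the drop in held-out performance; a large drop means the model relies heavily on that variable:

```python
# Global, model-agnostic variable importance via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-style tabular dataset.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

A ranking like this is one of the simplest artefacts a data science team can hand to non-technical stakeholders, which is why it often features in explainability toolkits.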

Figure 1: What Do We Want From Explainable AI?

When this project was first discussed with me, it consisted of implementing and comparing different explainability methods and package implementations (both developed internally and open-source tools) in order to propose a set of tools that would standardise those used by different teams within the Data Science Department. Due to my interdisciplinary experience in Horizon, partly working within the human factors field, I was aware that the technical qualities of explainability methods are not the only factor affecting how well technology is adopted and used in an institutional setting. To address the social aspect of the task, I suggested a series of interviews with the different stakeholders who interact with the Data Science team, in order to better understand what information they needed about the models and wanted to understand better, and their views on the methods currently implemented. Doing these interviews also allowed me to understand better what different roles and departments did and how they interacted, as previously I had mainly interacted only with the Data Science department.

From the interviews I learned that stakeholders in more business-related roles, as well as in higher-level roles, were interested in being able to translate how changes in a model’s behaviour impact the business, e.g. in terms of the number of people given loans, profit, or the number of defaults. Stakeholders from the technical departments were also aware of the shortcomings of the methods they currently used but had not had the time to test alternatives in their workdays. From the interviews I created a list of guidance for presenting technical materials to stakeholders and identified several criteria to evaluate in the second part of the project.

In the second part of the project, I compiled different packages/libraries (open source and internal packages developed by different C.O. departments and countries) to test their methods and give guidance on what could be beneficial to implement across the Data Science Department. During this process I learned that different branches of C.O. in different countries use different coding languages, and that models from different Data Science teams had different characteristics owing to their different needs and to what their department and branch had historically implemented. This meant that different teams often had to create their packages from scratch to meet the specificities of their models, even when using the same tools, which could have been avoided with uniformisation of the language or model-construction approach.

Final Reflections

This was my first time working in industry, and I was very pleasantly surprised by the importance that Capital One places on research, running internal conferences and maintaining specialised research centres (in the US, which is their biggest market). This was further encouraged by the very open and collaborative work environment; for example, the Responsible AI UK initiative I was involved with had regular meetings with other research teams within C.O. working in the same field.

While the company had very good intentions and initiatives, just like the projects I worked on, the reality of the scale of the UK branch meant that everyone on the team (apart from myself) worked on the Responsible AI initiative for only 10% of their time, on top of their regular team roles. The Explainability Toolbox project also showcased the drive to optimise processes across departments, even if this is hard to accomplish at scale due to logistical constraints.

Overall, my internship at Capital One gave me a better understanding of the Consumer Credit Industry and the way different departments come together to be able to provide a financial product to the consumer.

A lesson in remote work and open-source success: exploring emotion classification with farmers at CIMMYT

post by Eliot Jones-Garcia (2019 cohort)

My journey with CIMMYT, the International Maize and Wheat Improvement Centre, began shortly before enrolling with the Horizon CDT. In February of 2019, after having recently graduated with an MSc in Rural Development and Innovation from Wageningen University, I found myself in Mexico and in need of a job. I was familiar with the organisation because of its pivotal role in delivering the Green Revolution of the 1960s, an era of significant technological advancement in agriculture enabled by the seed-breeding innovations of CIMMYT scientists. Since then, they have branched out into several areas of agricultural research, from climate change adaptation and mitigation strategies to markets and value chains.

Thus, prior to beginning my PhD research in earnest, I spent six months conducting a systematic review of technological change studies for maize systems of the Global South. I worked at the headquarters in Texcoco and gained valuable experience among the various academic disciplines CIMMYT employees use to approach agricultural sustainability. The management staff, with whom I had forged strong relationships, volunteered to support me in my move to Nottingham and my transition toward research in ‘digital’ agriculture.

During my first year of research at Horizon, I worked with the CIMMYT staff to conceptualise an internship project. The plan was to head back to Mexico once again in the summer of 2020 to collaborate with scientists there. Unfortunately, however, the unexpected onset of COVID-19 forced me to change plans. At first the work was postponed in the hope the situation would ease but to no avail. I decided to undertake my internship remotely and part-time beginning in January of 2021. In hindsight I was incredibly pleased to have had the initial in-person experience but working at a distance would prove to have its own great lessons.

The goal of my work was to explore different methods of natural language processing, sentiment analysis and emotion classification for analysing interviews with farmers. COVID-19 had not only stunted my travel plans; all CIMMYT researchers were finding it hard to get to farmers to collect data. These interactions were increasingly taking place remotely, via mobile phones. This removed a significant interpersonal dimension from the research process: without supporting visual context, it became difficult to understand the affective elements of conversation. I was given access to a series of interviews with different agricultural stakeholders that had been manually coded according to their use of technology, and charged with finding out how these digital tools might aid in analysing audio and textual data.

I approached the task by exploring the grounding literature. The first major insight from my internship was learning how to turn around a thorough and well-argued review to motivate a study in a short time, whilst providing a good understanding for myself and the reader. This survey yielded a variety of approaches to defining and measuring emotion, selecting audio features for analysis, and modelling tools. I ended up taking a conventional approach, using ready-made R and Python tools and lexicons to analyse text, and a series of widely available labelled datasets to train the model. The second insight from my internship was to engage with different open-source communities and apply available tools to achieve my desired goal.
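To give a flavour of the lexicon-based approach, the toy sketch below counts emotion-category hits in a piece of text. The few-word lexicon here is invented for illustration; the actual study used ready-made R and Python packages with established, much larger lexicons:

```python
# Toy lexicon-based emotion tagger: map words to emotion categories
# and count hits per category in a text.
EMOTION_LEXICON = {
    "angry": "anger", "frustrated": "anger",
    "happy": "joy", "pleased": "joy", "glad": "joy",
    "worried": "fear", "afraid": "fear",
}

def tag_emotions(text):
    """Count emotion-category hits for each lexicon word in the text."""
    counts = {}
    for word in text.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return counts

print(tag_emotions("I was frustrated at first, but now I am happy and glad."))
# → {'anger': 1, 'joy': 2}
```

Real tools work the same way in outline, but with thousands of lexicon entries, weighting, and handling of negation, which is also why transcription and translation quality matters so much for this kind of analysis.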

In combination with working remotely, these activities gave me great confidence to deal with tasks independently, to seek out and gain new skills, and to apply them, with the support of experts, to a high degree of quality. More than anything, I feel this internship taught me to apply my academic abilities to unpack and explore problems in a concise and specific way, and to deliver the outputs CIMMYT wants: actionable insights that can be applied in a Global South context and that motivate future research.

In light of this, I produced a structured analysis and report for my CIMMYT supervisors, which was then published as an internal discussion paper in December of 2021. Findings from the study indicate that sentiment analysis and emotion classification can indeed support remote interviews and even conventional ethnographic studies. By revealing several biases related to the transcription and translation of text, the analyses suggested the need for greater consistency in future studies to mitigate any unreliability these may introduce. In terms of affect, there was a clear relationship between the different sources of data: dis-adopters of technology, or those who rejected its use, were shown to be angrier relative to the rest of the sample, whereas new adopters expressed greater joy and happiness. While this confirmed our expectations, there were also unusual insights; for example, female farmers were less fearful in the adoption of technologies. It is expected that in future this research may contribute to better-targeted interventions, making technologies available to those who are more likely to make use of them.

Moving forward, I continue to work with my industry partner on smaller projects and look forward to collaborating with them in a professional capacity. This experience has been a great help in my PhD by focusing the direction of my research and highlighting the role of data in shaping how knowledge is created and how that plays into agricultural development. It has helped me to manage tasks, to allocate time wisely, and to produce industry-standard work that provides benefit to farmers. The final version of the work is undergoing peer review and I hope to see it published in the near future.

If anyone would like to learn more about this work or would like to contact anyone at CIMMYT, please do not hesitate to contact me at eliot.jones@nottingham.ac.uk.

Many thanks for reading!

Eliot

Trusting Machines? Cross-sector lessons from healthcare and security

Royal United Services Institute (RUSI): Trusting Machines? Cross-sector lessons from Healthcare and Security, 30 June – 2 July 2021

post by Kathryn Baguley (2020 cohort)

Overview of event

The event ran 14 sessions over three days which meant that the time passed quickly.  The variety was incredible, with presenters from varied multidisciplinary backgrounds and many different ways of presenting.  It was great to see so many professionals with contrasting opinions getting together and challenging each other and the audience.

Why this event?

My interest in this conference stemmed from the involvement of my industry partner, the Trustworthy Autonomous Systems hub (TAS), and wanting to hear the speaker contributions by academics at the University of Nottingham and the Horizon CDT.  The conference focus on security and healthcare sectors was outside my usual work, so I thought the sessions would provide new things for me to consider. I was particularly interested in gaining insights to help me decide on some case studies and get some ideas on incorporating the ‘trust’ element into my research.

Learnings from sessions

Owing to the number of sessions, I have grouped my learnings by category:

The dramatic and thought-provoking use of the arts

I had never considered the possibilities and effects of using the arts as a lens for AI, even as a long-standing amateur musician. This is a point I will carry forward, maybe not so much for my PhD as for training and embedding in my consultancy work.

The work of the TAS hub

It was great to learn more about my industry partner, particularly its interest in health and security.  I can now build this into my thoughts on choosing two further case studies for my research.  Reflecting on the conference, I am making enquiries with the NHS AI Lab Virtual Hub to see whether there are relevant case studies for my research.

Looking at the possible interactions of the human and the machine

Overall, and in a good way, I came away from the event with more questions to ponder, such as: ‘If the human and the machine were able to confuse each other with their identity, how should we manage and consider the possible consequences?’ My takeaway was that trust is a two-way street between the human and the machine.

Aspects of trust

I’d never considered how humans already trust animals and how this works, so the Guide Dogs talk gave me something entirely new to think about: the power the dogs have, and how the person has to trust the dog for the relationship to work.  Dr Freedman’s session, in which he equated trust to a bank account, also brought the concept alive.  Ensuring that the account does not go into the ‘red’ is vital, since red signifies a violation of trust, and recovery is difficult. Positive experiences reinforce trust, so the balance needs to be kept topped up.

The area of trust also left me with a lot of questions whose place in my research I need to think about, such as ‘Can you trust a person?’, ‘Do we trust people more than machines?’ and ‘Do we exaggerate our abilities and those of our fellow humans?’  The example of believing we can tell the difference between a picture and a deepfake when we cannot is undoubtedly something to ponder.  As that example shows, the assumption that a human is more trustworthy is, in some cases, a fallacy. Prof Denis Noble also suggested that we have judges because we don’t trust ourselves.

I have reflected on the importance of being able to define trust and trustworthiness.  Dr Jonathan Ives described trust as ‘to believe as expected’, whereas trustworthiness is ‘to have a good reason to trust based on past events’.  His example of theory and principle helps show this point: we trust the principle of gravity because the apple reliably falls from the tree; however, we cannot view AI in the same way.

[article on trust and trustworthiness]

The discussion around trust being an emotion was fascinating; as a lawyer, it made me question how we could even begin to regulate this.  I also wondered how this fits in with emotional AI and the current regulation we have.  I believe that there may be a place for this in my research.

The global context of AI

This area considered whether there is an arms race here, and it was interesting to ponder whether any past technology has ever had the same disruptive capacity.

The value of data in healthcare

There were so many genuinely great examples showing how NHS data can help people in many situations, from imaging solutions to cancer treatment.  I also found the Data Lens part very interesting, enabling a search function for databases within health and social care to find data for research purposes.  The ability to undertake research to help medical prevention and treatment is excellent.  I also found it interesting that the NHS use the database to reduce professional indemnity claims. I wondered about the parameters in place to ensure the use of this data for good.

[slide on Brainomix information]

The development of frameworks

NHSX is working with the Ada Lovelace Institute to create an AI risk assessment similar to a DPIA (Data Protection Impact Assessment). The NHS is looking to have a joined-up approach between regulators and has mapped the stages of this; I am looking for that mapping exercise and may request it if I’m unable to locate it.  I was also encouraged to hear how many organisations benefit from public engagement and expect it from their innovators.

[slide on AI ethics - A responsible approach]

Overall learnings from the event

  • Healthcare could provide a possible case study for my research
  • I have more to consider about how to build trust into my research
  • Regulation done in the right way can be a driving force for innovation
  • Don’t assume that your technology is answering your problem
  • It’s ok to have questions without answers
  • Debating the problems can lead to interesting, friendly challenges and new ideas
  • Massive learning point: understand the problem

Three Actually Isn’t a Crowd: Reflections on my Internship with the UKRI TAS Hub

post by Cecily Pepper (2019 cohort)

About the Internship

In December 2021, I began an internship with the UKRI TAS (Trustworthy Autonomous Systems) Hub on two projects: TAS for Health and TAS Test and Trace. I applied to this internship as I have an interest in technology and autonomous systems, how people develop and measure trust towards technology, and, perhaps most of all, the effect of technology on health and wellbeing. My own PhD project is about exploring the impact of social media platforms on the mental health and wellbeing of care-experienced young people, so hopefully the connection and interest between technology, health, and wellbeing is clear. Another key motivator in applying for the TAS internship was that the role involved completing an in-depth thematic analysis for each project. As I am doing multiple thematic analyses for my own research, I thought this would be a great opportunity to enhance and develop my skills in this area. Moreover, the internship was a role that included working with two other interns to complete the analyses. This especially interested me as I enjoy working with others and was curious as to how the group analyses would work compared to the analyses I have done by myself.

The TAS for Health project explored the attitudes towards using a smart mirror for health from two participant groups: those who were recovering from a stroke and those who have the medical condition multiple sclerosis (MS). My role involved taking part in the participant workshops, preparing the transcripts from the workshops, and completing the thematic analysis with the other interns. It was a truly enjoyable experience listening to the experiences of the individuals who participated in the research. As well as this, assisting in the workshops offered another layer of insight to the analysis. The other project, TAS Test and Trace, followed on from previous studies around the general public’s attitudes and trust in the NHS Test and Trace application for Covid-19. For this project, my role was to again prepare the transcripts and complete a thematic analysis with the other interns. I will also contribute to the paper writing for both projects, which are currently underway. This is another aspect of the internship that applies to my own PhD activities, as it is a great opportunity to develop my paper writing skills ready for when I aim to publish my own studies.

Reflections on the Internship Experience

I have thoroughly enjoyed the internship and learned so much. I have developed my thematic analysis skills, my collaborative skills, and learned more about how to present and display research findings (to name a few). For this blog, I will now share some reflections on the experiences I have had now the two projects have ended. Perhaps the biggest and most salient reflection I have from working with two TAS research groups was the value of working with others and being part of a research team. I currently don’t belong in a research group and, as PhD students may know, the doctorate journey can be a lonely one. I find many times in my own work that I wish I could bounce my ideas off someone, especially during analysis work. Don’t get me wrong, I have wonderful supervisors who I meet with regularly and they offer brilliant and insightful advice; but doing an analysis collaboratively with two other people was a fantastic experience. We were able to continuously bounce our ideas off one another, question our reasoning together, and have regular chats about anything we queried. This made me appreciate teamwork in a completely new light and it’s safe to say three was definitely not a crowd! We also all learnt a new way of completing a thematic analysis, explained by a member of the research group, which was a valued addition to our current skillset.

In addition to this, having weekly meetings with a research group was a great experience and made me really feel like I was part of a team. As I mentioned, I don’t belong to any research groups so I haven’t had this experience yet; apart from perhaps my CDT cohort, who I love spending time with, but our research is all so different so it can be hard to find commonalities to discuss, and get-togethers are rare now we’re in the later stages of our PhD and due to the pandemic. On a personal note, I’m a shy introvert and tend to anxiously avoid social events, but being required to work with others so regularly and closely was valuable in pushing me outside of my comfort zone. So, one of the biggest take-aways from my internship is the value of working closely with others and having a team of researchers to meet with regularly. I’ve met some lovely people who I hope to work with again in the future and I have learned so many things about the importance of working with a research team.

The second reflection from my internship experience was the realisation of, and surprise at, how welcome a distraction from my own PhD work was. I was in a bit of a rut, struggling to recruit a hard-to-reach participant group whilst also struggling with the final stages of a separate thematic analysis for another of my studies. With the internship, I soon realised that I welcomed the work with open arms and found my research spark again; it had obviously been dimmed by my own research struggles. While I knew subconsciously that I was struggling with my own work, the internship reignited my love for research and made me realise I needed to remove my head from the imaginary sand and address the issues in my own work. It was a reminder that I had a passion for research, and encouragement to resolve those issues rather than burying and avoiding them.

Overall, I had a great time and learnt a lot, both academically and personally. Working on two thematic analyses was fantastic practice and I believe my skills have developed significantly. Additionally, being part of a research team with regular meetings and deadlines was useful preparation for the future, in both an industry and an academic sense. Despite the value of the professional and research skills I have developed, the most enjoyable part of the internship was working closely with other researchers and having a welcome break from the lonely world of doing a PhD during a global pandemic.


The joy of building things. My reflection on the internship at BlueSkeye AI

post by Keerthy Kusumam ( 2017 cohort)

September 2020 – January 2021

I interned at BlueSkeye AI, a company that delivers ethical AI to support mental health in vulnerable populations using facial and voice behaviour analysis. The long-term vision of BlueSkeye AI is to ’Create AI you can trust for a better future, together.’ The goals of my PhD align perfectly with those of BlueSkeye, where comprehending various facial behaviours to recognise markers of mood disorders forms a core part of the work. BlueSkeye AI was cofounded by my PhD supervisor Prof Michel Valstar, and the team includes several of my past PhD colleagues. The following points are my reflections on my four-month internship at BlueSkeye AI.

The joy of building things that work. The internship at BlueSkeye rekindled my enthusiasm for building systems that work in the real world, face real challenges, and create real impact. When I joined, BlueSkeye AI had a product about to be released to the market, and what I built would be integrated into this product. That made the problem extremely well defined: we were not trying to define a problem but rather to engineer a solution that works on real-world data, leveraging cutting-edge computer vision and machine learning research.

Real World vs Research World. My emphasis on real-world data stems from my divided self: I am both a computer vision researcher and a roboticist. Before my PhD I spent nearly four years in a robotics research lab with an active collaboration culture, where everyone in an open-plan workspace contributed to projects irrespective of their original funding sources. This cultivated an exchange of ideas across disciplines (computer vision, cybernetics, robotics, reasoning, machine learning, etc.), leading to very creative and interesting bodies of work. In robotics, computer vision is often a tool relied upon to make decisions, which means robustness and consistency precede accuracy. In computer vision research, however, beating the state of the art on benchmark datasets seems to be the key marker of success. I enjoy both aspects, and the internship at BlueSkeye AI gave me just that: a place to bring them together. I got to build a computer vision-based social gaze estimation system that works on a smartphone. The challenge was finding the right balance between exploration and exploitation: I had to optimise for efficiency, usability, practicality, simplicity and data efficiency along with the standard performance metrics I use in research.

The Team and Teamwork. My onboarding was seamless, owing to the hands-on approach of BlueSkeye AI’s leadership. I was also familiar with the team, so I was lucky to enjoy an incredibly friendly and supportive environment. The weekly meetings, where everyone discussed their progress or the issues they faced, served as learning sessions for me. I came to understand the value of communication and brainstorming across the team as a whole in keeping up momentum. I worked in sync with the lead machine learning engineer, who set up several documents and pieces of code specifically for me, removing my roadblocks to integrating the module into a mobile device. I also learned how managing tasks in a time-critical manner saves time and resources for the company as well as yourself.

Importance of values. One should never compromise on their values while working for a company, and it is important to work in a place where value systems align. BlueSkeye AI’s five-year mission is: ’To create the most-used technology for ethical machine understanding of face and voice behaviour that enables citizens to be seen, heard, and understood.’ I was impressed by their sensitivity towards mental health research, their strict adherence to ethical guidelines while handling data, their transparency with data volunteers about their data, and the numerous clinicians with great expertise on board. Being part of the company, albeit during a short internship, gave me a sense of purpose, and I felt attuned to my values.

One Giant Leap for My Future: Summer Internship Experience with NASA GeneLab

post by Henry Cope (2019 cohort)

Over the summer I had the honour of taking part in the NASA GeneLab summer internship programme. Despite previous plans to complete this in sunny California, the pandemic made it necessary to adapt the internship format, which I must admit was bittersweet. Nevertheless, I was incredibly excited to step into my role as a space biology bioinformatics intern.

Now, I appreciate right off the bat that this might raise a few questions, so I will endeavour to briefly break down the relevant terms as follows:

      • Space biology – This is the study of the adaptation of terrestrial organisms (e.g., you and me) to the extreme environment of space. Two of the main spaceflight stressors are increased radiation exposure and microgravity (0G). The knowledge generated from space biology is important for developing improved countermeasures, such as measures to reduce the microgravity-driven muscle loss experienced by astronauts, which also occurs on Earth due to factors including muscle-wasting diseases or bed rest following surgery. If you are interested in learning about space biology in more detail, I can recommend this open-access review; it’s a very exciting time right now for spaceflight!
      • Omics – These are types of biological “big data” (usually ending in “-omics”, go figure) that tell us about the underlying functioning of different systems within the body. Of course, a classic example is genomics, in which your unique DNA sequence imparts traits such as eye colour. However, there is also transcriptomics, which captures snapshots of how activated/expressed your genes are at given points in time.
      • Bioinformatics – This is essentially analysing biological data, including omics, via software. When a sample of biological material is taken, it can be processed in the lab for different kinds of omics analyses and then computational methods are used to identify meaningful patterns in the data. Lots of programming! 🙂
      • NASA GeneLab – NASA GeneLab is an organisation that consists of two primary components. One is the data side, which is delivered via a carefully curated public biobank of omics collected from spaceflight missions (usually involving model organisms like mice), or from studies on Earth that simulate aspects of spaceflight. The second side of GeneLab is the people side, which is mainly delivered via international analysis working groups (AWGs) that work together to analyse the data within the repository. Spaceflight experiments are costly, so GeneLab’s open-science approach of increasing access to data and collaboration during analysis is important for maximising the scientific potential of these experiments.
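
The bioinformatics bullet above can be made concrete with a tiny sketch. Everything here is illustrative: the gene names and counts are invented, and real transcriptomics pipelines (DESeq2 and similar tools) also model variance and correct for multiple testing; log2 fold change is just one common summary statistic for comparing expression between conditions.

```python
# Minimal, hypothetical illustration of one common bioinformatics step:
# comparing gene expression between two conditions via log2 fold change.
import math

# Invented mean expression counts per gene (spaceflight vs. ground control).
flight = {"GeneA": 200.0, "GeneB": 50.0}
ground = {"GeneA": 100.0, "GeneB": 100.0}

def log2_fold_change(a: float, b: float, pseudocount: float = 1.0) -> float:
    """log2 ratio of two expression values; the pseudocount avoids log(0)."""
    return math.log2((a + pseudocount) / (b + pseudocount))

for gene in flight:
    print(gene, round(log2_fold_change(flight[gene], ground[gene]), 2))
```

A positive value here would suggest a gene is more active in flight samples, a negative value less active; in practice such calls are only made alongside statistical testing.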

With the definitions out of the way, I will briefly describe my primary project for the internship. Essentially, I was presented with several transcriptomics datasets that had been generated from RNA extracted from the skin of mice. These datasets were derived from mice that had been flown on different missions, with lots of other variables such as differences in diet and duration spent on the International Space Station (ISS). Skin is particularly interesting in the context of space biology for several reasons as follows:

      • In spaceflight, dermatological issues such as rashes are very common
      • Skin is the first line of defence against cosmic radiation and an important barrier against pathogens
      • Skin can be monitored using non-invasive methods like swabs, which avoids risks associated with invasive biopsies
      • Skin can act as a “mirror”, telling us about the underlying health of the body in terms of things like immune function and diet
      • Despite the aforementioned importance of skin, skin is incredibly understudied in space!

I had carried out some initial analysis of the datasets prior to the start of the internship, under the guidance of Craig Willis, who was at the time a PhD student at the University of Exeter and is now a researcher at Ohio University! Whilst I had prior experience with programming, bioinformatics was new to me. Craig very kindly showed me the ropes so that I would have the necessary skills to jump straight into the internship project. That said, GeneLab runs programmes for teaching bioinformatics to students at different levels, so having prior bioinformatics skills was not at all a requirement.

Just before I started the internship, I met Afshin Beheshti, who is a bioinformatician and principal investigator at KBR/NASA Ames Research Center, amongst other roles! Afshin was incredibly friendly so we got on right away. Throughout the internship we met weekly via video call, but we also communicated via Slack throughout the week. I strongly believe that a line of communication which is more direct than email is essential for virtual internships. During the internship, GeneLab also organised online networking events, which gave me the opportunity to talk to the other interns about their projects.

Following my internship, I have continued to work on the skin analysis project, and we are now striving towards a publication, which will include astronaut data (a rarity!) alongside the rodent data. I also had the honour of presenting some of our findings online at the public NASA GeneLab AWG workshop in November, and in-person at the Royal Aeronautical Society Aerospace Medicine Group Annual Symposium in London in December. As part of the continued work on the project, I have also been able to engage with the GeneLab for High School (GL4HS) programme. Several students who have previously completed a high school level internship with GeneLab are now working on tasks such as literature review and figure generation for the publication. An additional output is that some of the semi-automatic figures that I have developed for this project have been adapted to different datasets for use in publications for the Covid-19 International Research Team (COV-IRT), of which Afshin is president.

Ultimately, I am very happy to have completed an internship with GeneLab. I’ve developed some great relationships along the way, which have continued past the scope of the internship. In particular, I’d like to thank Sam Gebre for organising the internship, Afshin Beheshti for being an excellent supervisor, and Sigrid Reinsch, Jennifer Claudio, Liz Blaber and the students involved in the GL4HS programme. If you wish to know more about my project or have questions about space biology in general, please feel free to reach me at: henry.cope@nottingham.ac.uk

-Henry

My Placement at the BBC

post by Joanne Parkes (2020 cohort)

My Business Partner is BBC Research and Development. Early in 2020, I came across an advert for a funded ICASE PhD Studentship seeking individuals interested in Enhancing the Digital Media User Experiences using 14 Human Values. It was proposed that these values, the psychological drivers which underpin our behaviour, could be utilised in some way to shape future offerings and assess their impact, rather than relying on more traditional performance measures such as clicks and view time. The studentship would be dedicated to investigating ways of delivering this.

Whilst my first degree is in Media Production and I initially worked in Radio, my career ended up being in Business Psychology where I have specialised in employee selection, assessment and engagement. This research area sits nicely at the Venn intersect of my interests so it was not a difficult decision to apply, especially as I was excited at the prospect of having time allocated to working with the BBC again by way of the placement.

Early on in the studentship, however, I sat in on a presentation given by a couple of members of an earlier cohort about their placements. It didn’t seem that their skills and abilities had been well used, and I was left with the impression that the placement was little more than work experience. I’d resigned myself to just getting it over and done with; frankly, I didn’t expect to get much more out of it than 20 credits and, if I were lucky, perhaps some useful connections to leverage when it came to conducting my research.

I’m pleased to say that this has not been the case. It soon became apparent that my experience was going to be very different from what I’d heard from those two students, whose experiences were perhaps the unfortunate exception to the rule. My industry supervisors engaged with me right from the start, setting up regular meetings which alternated between discussing their work and my studies. They sought my input on various projects, from peer review to internal presentation of data analysis, and made me feel valued for my contributions.

It probably helped that I started the studentship while my industry supervisors were part way through creating a Human Values inventory questionnaire that could support several objectives, from helping to design values-led ideation workshops through to assessing deliverables in terms of how well they facilitate achievement of values-aligned aspirations. My working history has imbued me with directly relevant and transferable skills, giving me the confidence to review the work in progress and proffer constructive feedback, which was granted more than lip-service consideration. This marked an unofficial start to working towards my placement.

Soon after, a section of the inventory was tailored to measure alignment with some of the values in a workplace setting, specifically 'Belonging to a Group' and 'Receiving Recognition'. I could draw parallels with my previous work in the field of Employee Engagement on measuring attitudes towards Equity, Diversity and Inclusion, and again I was encouraged to provide input towards survey items. Where this tool differed, however, was in its intention to engender self-reflection at the time of answering, perhaps prompting discussion with immediate teams and line managers where values were not being met, rather than analysis of responses at an aggregate level, as is more typical in Employee Engagement surveys.

Early in the second semester, the placement was formally kicked off and I was able to be more involved in several short studies:

Study 1 (n: 153) sought participants' attitudes on BBC iPlayer's capacity to fulfil their values using a 5-point Likert scale, and used open questions to seek examples of programmes in each of the values areas (although platforms were often suggested as well, e.g. YouTube being posited for 'Growing Myself'). This provided some insight into which values were considered less well served, plus an indication of group score differences relating to gender. There were some clear winners among programmes, with 'Blue Planet' listed most often by far for facilitating 'Explore the World'.

Study 2 (n: 1,147) was a very short follow-up survey in which participants were given 20 programmes to rate in terms of their ability to facilitate each of the values. I helped to select the programmes based on a combination of recent ratings and the output of Study 1, in an attempt to present a range of popular genres and so increase the likelihood of participants having watched them. I was given autonomy over the analysis and presented our findings across the two studies at an internal R&D monthly sectional meeting which included the Head of Applied Research.

Study 3 (n: 15) took me a big step outside my comfort zone in the form of in-depth interviews on attitudes towards personalisation online. This is the first study I have been involved in where we are actively seeking to publish the findings – wish us luck!

Along the way, I have also volunteered to take part in studies conducted in other areas of BBC R&D. One (Orchestra Surround Sound) involved calibrating multiple devices (e.g. computer, phone, tablet) to create a more interactive and engaging experience with an orchestra, enabling participants to experiment with sound placement and volume. Another involved evaluating room acoustic representation in binaural spatial audio (Polymersive Reverb). A further sound-specific study (Soft Clipper listening tests) related to software designed to address sound distortion as part of the BBC's upgrade of its FM transmitters. I also participated in a critical evaluation of R1 Relax and attended a workshop discussing the benefits and ethical implications of applying AI and ML to thumbnail selection for programme representation. The latter was really helpful for picking up pointers on focus group facilitation (particularly in a remote setting).

As a panel member in a facilitated discussion group run by the BBC R&D Diversity & Inclusion working group, which is driving a range of initiatives to identify and address challenges in this space, I shared what I consider to be the challenges and benefits of my neurodivergence in the workplace. On another occasion, I participated in Hybrid Meetings user testing, providing feedback on issues and potential policies from my disability perspective. As much as this was another opportunity to network, it also provided me with a platform through which to advocate.

The work I have participated in, both that allocated by my industry supervisors and that which I have volunteered for, has been beneficial in a number of ways. Among other things, it has:

      • given me an idea of the scope of experimentation that the BBC has the resource to conduct,
      • helped me learn and practise research skills I am less comfortable with, particularly qualitative rather than quantitative approaches,
      • given me some ideas as to how I might conduct some of my own research in the future,
      • provided me with some findings directly relevant to my research question,
      • cemented a strong working relationship with my industry supervisors to the extent that I feel a part of (and meaningful contributor to) the wider team rather than an adjunct.

A challenge with such a large organisation, with many initiatives vying for attention, seems to be that it can sometimes be hard to get traction for a new idea. However, with many staff still working remotely, I am not in a position to conclude whether it is the nature of the organisation or a product of working in isolation, but on occasion proposing a cross-team study has felt akin to pushing at an open door only to end up in a room called limbo.

On the other hand, a benefit of the scale of my industry partner is its reach when it comes to recruiting study participants. I was really impressed with the speed with which we were able to reach target completion rates on several occasions. This said, I have learned not to take this for granted, as I am aware of situations where colleagues have found recruitment much more arduous, so consideration still needs to be given to sampling, targeting and study appeal.

It seems almost impossible to reflect on anything that has occurred in the last couple of years without making reference to COVID-19. In my case, it has meant far less face-to-face time, but I don't believe this has been too detrimental as, prior to commencing the studentship, I had worked mainly from home for some years. This said, I was finally able to visit the offices towards the end of last year, and a tour of the facilities revealed options for study approaches that, whilst potentially beneficial, now seem perhaps indulgent simply because the location is still so novel to me.

When the placement started (and when it will end) has a very blurred timeline; it certainly hasn't consisted of a discrete 3-month full-time or 6-month part-time block. Such a fixed block brings the potential to over-commit, but also the risk of forgetting the genuine partnership that should run throughout the course of study.

To conclude, I have gained far more from the placement than I initially expected. I've had face time with people well placed to support my research in the future and, importantly to me, I've felt that I have made meaningful contributions throughout, rather than (virtually) turning up just to tick a box.