Hello everybody, I’m Cecily from the CDT 2019 cohort. As part of my PhD research, I’m exploring how social media affects the mental well-being of young people, with a specific focus on our sense of self. I am keen to hear thoughts from all young people, but I am particularly interested in hearing from looked-after young people. I am also interested in hearing from social care professionals on this research topic.
For my PhD, I am conducting three studies, two of which are with young people and one with social care professionals. These will be informal, online discussions that explore the effects of social media on our sense of self, existing policies surrounding social media, young people and mental health, and how lockdown may have impacted young people’s social media use and mental health. I hope that the studies will offer young people the opportunity to share their voice on this topic, and that the results may have implications for future policies and social media design.
I am a 3rd year PhD student at the University of Nottingham, partnered with Dstl. My PhD project is about the detection of deep-learning-generated aerial images, with the final goal of improving current detection models.
I am looking for participants to take part in my ongoing online study on identifying synthetic aerial images, which we have created using Generative Adversarial Networks (GANs).
I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (Satellite aerial images) or GAN-generated images.
This is study 2 in a larger PhD project looking at the generation and detection of GAN-synthesised earth observation data.
Purpose: To assess how difficult it is to distinguish GAN-generated fake images from real satellite photos of rural and urban environments.
Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.
Commitment: The study consists of a short survey (2-5 minutes) followed by a longer detection task (10-20 minutes, which can be completed in your own time) hosted on Zooniverse.org.
The study involves identifying the synthetic image in each pair of images, then marking the parts of the image that informed your decision.
How to participate? Read through the information on the project site, then proceed to the link for Study 2.
As someone whose journey so far has been straight through education, from school to BSc to MSc and PhD, exposure to life outside of the education bubble has been fairly limited. So, for the internship, I was keen to work in industry!
With the arrival of the pandemic in my third year, there was a fair amount of concern that internship opportunities would be sparse. Around the time I was starting to mildly panic, I saw an advertisement for a virtual internship as a UX/UI Designer. The company was a start-up called Footfalls and Heartbeats, and they had developed a technology that allowed knitted yarns to act as physiological sensors. The internship was focussed on one product, a knee sleeve designed to provide physiological feedback to physiotherapists and athletes during training. The product was still under development, but the prototype looked just like the soft knee braces that weightlifters wear, and the data it could measure included the knee’s range of motion and a squat repetition count; the product had the potential to measure velocity, but that was an aim for further in the future.
The description seemed tailored to my idea of an ideal internship! It was related to my PhD, as my research involves investigating effective ways of conveying brain data to users, and the internship project investigated ways of conveying the physiological data from the knee sleeve to users. The description of the project also suited my interests in sport (and, weirdly, knees and sewing). I applied and was lucky enough to be accepted. The application process had a few stages. The first stage was the submission of a CV and personal statement. After that, I was asked to do a practical task: a UX task of evaluating where I would place a certain aspect of the knee sleeve connection within the app, and a UI task of making high-fidelity wireframes in Figma (a design tool) based on low-fidelity wireframes that were provided. The task had a 5-day deadline and I had no UI experience. To be honest, I had never heard of Figma (or high-fidelity wireframes, or basically anything to do with UI), so I basically spent all 5 days watching YouTube videos and doing a lot of learning! An interview with a director and a data scientist/interface designer followed the practical task, and they liked my design (somehow I forgot to tell them that I had only just learned what Figma was!)
There were two of us doing the internship; I was to design the desktop app and the other person was to design the mobile and tablet app. We were supervised by the data scientist who had interviewed me; he was a talented designer, which meant he often took on design roles in the company. He wanted to create an office-like atmosphere even though we were working remotely, so the three of us remained on a (muted) voice call all day and piped up whenever we wanted to discuss anything.
With the product still very much under development and its direction ever-changing, our project changed during every weekly team meeting for the first 4 or 5 weeks. I think this was because the company wasn’t really sure where the product was going, and thus they would ask us to do something, like display a certain type of data, only for us to find out the next week that the product couldn’t measure that type of data. The product was supposed to be a business-to-consumer product, so we started designing a detailed app fit for end users; but the company’s crowdfunding was unsuccessful, so they changed direction to create a business-to-business product. This meant that our project changed to designing a tablet demo app which showcased what the product could do. They definitely didn’t need two interns for this project, but we made it work!
The most stand-out thing to me about the whole internship was the lack of market research within the team – I don’t think there was any! The product was designed for professional athletes and physiotherapists, yet I really couldn’t see how the two main types of data it could measure would be useful for either party. I was pretty sure athletes wouldn’t want an app to count their reps when they could do it in their heads, and I was pretty sure that physios were happy measuring range of motion with a plastic goniometer (and patients with swollen knees wouldn’t be able to fit into the knee sleeve). I raised these points and the company asked me to speak to my personal physio; his feedback was that he would have no use for the knee sleeve. However, the company decided to keep these functions as the main focus of the knee sleeve’s measurements, I think because measuring this data was most achievable in the short term. The whole thing was proper baffling!
However, by the end of the internship we had produced a really nice demo app. I had learned a lot about how to design a whole app! We generally started with sketches of designs, which were then digitised into low-fidelity wireframes and developed into the high-fidelity end version. I also learned about some really helpful tools that designers use, such as font identifiers and colour palette finders. We produced a design document which communicated our designs in detail to the engineers who were going to build the app. And I gained a very valuable insight into a start-up company that was chaotic yet friendly.
My supervisor on the project was great to work with. He made sure we got the most out of the internship and had fun whilst doing it, and he created a very safe space between the three of us. The company had a very inclusive and supportive atmosphere and they made us feel like part of the team. I think the product has a lot of potential but needs developing further which would mean a later release date. I’m most looking forward to seeing what happens with the knitted technology sensors as they can have many potential applications such as in furniture or shoes.
My studio in Nottingham – Primary – is running a large crowdfunding campaign to support the development of the building and keep the arts resilient in Nottingham after the pandemic. They are offering an opportunity to receive artworks, art books, postcards and more in return for supporting their developments.
Primary is a local artist-led contemporary visual arts organisation, based at the old Douglas Road Primary School in Radford, Nottingham. They run a free public programme of events and exhibitions, and provide studio spaces to over 50 resident artists. They are a vital arts space for the city. They have worked regularly with Horizon and the Mixed Reality Lab through my work and other collaborations with researchers.
Jenn Layton Annable (2020 cohort) is researching the intersection between gender, autistic experience, and self-identity.
Jenn joins Hanna Bertilsdotter-Rosqvist on the podcast by AutSpace to discuss how terminology – the choice of words – is essential in the process of creating an autistic space. They also discuss the unusual internal sensory differences that Jenn experiences.
In the talk, Jenn refers to an article called Sensory Strangers. This is a chapter, of which Jenn is a co-author, in the book Neurodiversity Studies: A New Critical Paradigm, published by Routledge.
If you are interested in reading the article you can find it here.
The ACM Conference on Fairness, Accountability and Transparency (FAccT 2021) is an interdisciplinary conference with an interest in research on “ethical” socio-technical systems. Hosted entirely online, the 2021 conference was its 4th edition; it started out fairly small in 2018 but has received a growing amount of interest over the last couple of editions.
The conference started on the 3rd of March with the Doctoral Colloquium, followed by a Tutorials day (divided into three tracks: Technology; Philosophy/Law/Society; and Practice) and a CRAFT day.
Before the colloquium we were asked to prepare an informal presentation on our PhD work to give to the other participants in small groups. Having small breakout groups led to very engaging back-and-forth discussions on everyone’s work. Following on from that, we had the choice of several discussion topics, each in a different breakout room; the topics ranged from research interests to career advice to current world events. For the last activity of the colloquium, we were divided into groups with similar research interests and each group was allocated a mentor. The discussions ranged from understanding how all of the attendees’ research fitted together within a higher ecosystem to various approaches to incorporating our world/political views within our research. When focusing on our own work, it is easy to lose sight of the bigger picture and even to stop critically evaluating our own approach, so having a space to discuss it with a varied group of people working in a similar area was one of the most enriching experiences of the conference.
Another personal highlight of the conference was the CRAFT session “An Equality Opportunity: Combating Disability Discrimination in AI”, presented by Lydia X. Z. Brown, Hannah Quay-de la Vallee, and Stan Adams (Center for Democracy & Technology). The CRAFT sessions are specifically designed to bring academics from different disciplines together to discuss current open problems. While algorithmic bias and discrimination regarding race and gender are more widely studied, disability bias has been severely understudied, in part because of the difficulty of summarising the varied disability spectrum in discrete labels. The session’s discussion was about imagining possible ways to address disability bias while still giving a voice to people with lived experience.
After the weekend, there were three full days of paper presentations. Each day there was a panel session on a given topic, followed by a keynote. On day one the panel topic was “Health Inequities, Machine Learning, and the Covid Looking Glass”, followed by an excellent keynote by Yeshimabeit Milner from Data For Black Lives on Health, Technology, and Race (https://www.youtube.com/watch?v=CmaNsbB-bIo for the keynote video). The second day’s discussion centred on the flaws of mathematical models of causality and of fairness approaches. To end the conference on a more optimistic note, the final discussions covered possible future directions and the importance of good journalism in auditing algorithms and making them accountable to the public. The keynote speaker was Julia Angwin, the first journalist to report on the bias of the COMPAS recidivism prediction tool. The COMPAS dataset’s bias was one of the issues that made the topic of algorithmic fairness gain traction, and it is still commonly used in the Fairness in Machine Learning literature. Julia is currently in charge of The Markup, an independent, not-for-profit newsroom that focuses on data-driven journalism.
The different discussions at the conference gave me some space to look at my own work and critically reflect on what I am doing, why I am doing it and the approach that I am taking – a conversation with myself that is still in progress. It was not just the very interesting research that was presented, but the deep discussions that took place, that made attending FAccT 2021 an enriching experience.
Here are some of my favourite papers of the conference:
Many of us are spending a lot of time in Teams meetings. One challenge of remote working is the reduced ability to express, and pick up, subtle body language and facial cues, which can contribute to difficulty communicating – even before broadband connection comes into play.
Microsoft launched Reactions in Teams in December, which allows us to show a reaction while someone else is talking.
This is great, and people in meetings I’ve been in have found it really helpful. However, there are currently very limited options to emote. We can either like (thumbs up), love (heart), clap or laugh. Or put our hand up.
It’s great – but we can’t use it to express different emotions. In particular, all the reactions are positive. This may contribute to pleasant team meetings – but it risks contributing to ‘groupthink’. Uncertainty, dissatisfaction and frustration are important social signals, often communicated through subtle facial cues, which on a Teams call may be impossible to spot. If I’m not feeling comfortable for some reason in a Teams call, my only options are to speak out verbally, keep schtumm, or use the comments (which a speaker may not see).
I was recently in an excellent session on challenging conversations – having a visual way to challenge statements may add to verbal intervention as a way to signal that something is not OK.
Zoom and Slack have a much wider range. Taking Zoom as an example – more options, still quite positive, but with the ability to give a thumbs down or say ‘No’:
Someone has created a technical solution, but it needs to be set up by sysadmins in the organisation (example).
I’ve come across other ways of signalling emotions, including non-technical ones – for example, some teaching staff encourage students to use their Teams/Zoom background, or even their clothes, to signal how they’re feeling (red or amber for different shades of ‘I’ve got some concerns’).
From a discussion at a team meeting, I decided to try and solve this problem using Snap filters. The brief was to create a filter that allowed a wider range of emotes, presented in the same style as the existing Teams reactions, and in particular to plug the gaps in the current reactions around expressing uncertainty or concern.
I present – the Emoji Board! Use the link to access, or scan the following with Snapchat:
Using this with Snap Camera allows the following emotes, presented in the same style as Teams reactions (they appear on screen for 3 seconds, centred, with a transparent background).
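The overlay behaviour described above – an emote shown centred on screen for 3 seconds over a transparent background – can be sketched in plain Python. This is a minimal illustration of the compositing logic only, not the actual Snap filter code; the function names, 30 fps frame rate and RGBA image format are my own assumptions:

```python
import numpy as np

FPS = 30
DISPLAY_SECONDS = 3  # reactions stay on screen for 3 seconds


def overlay_centred(frame: np.ndarray, emoji_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite an RGBA emoji onto the centre of an RGB frame.

    The emoji's alpha channel gives the 'transparent background':
    fully transparent pixels leave the frame untouched.
    """
    fh, fw, _ = frame.shape
    eh, ew, _ = emoji_rgba.shape
    y0, x0 = (fh - eh) // 2, (fw - ew) // 2  # centre the emoji

    out = frame.copy()
    region = out[y0:y0 + eh, x0:x0 + ew].astype(float)
    rgb = emoji_rgba[..., :3].astype(float)
    alpha = emoji_rgba[..., 3:4].astype(float) / 255.0
    out[y0:y0 + eh, x0:x0 + ew] = (alpha * rgb + (1 - alpha) * region).astype(np.uint8)
    return out


def react(frames, emoji_rgba, start_frame):
    """Overlay the emoji for DISPLAY_SECONDS starting at start_frame."""
    end = start_frame + DISPLAY_SECONDS * FPS
    return [overlay_centred(f, emoji_rgba) if start_frame <= i < end else f
            for i, f in enumerate(frames)]
```

In a real filter the same idea runs on the live camera feed, with the emoji image swapped in for whichever reaction was triggered.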