Call for Participants: Identifying synthetic aerial images

post by Matthew Yates (2018 cohort)

Hello,

I am a 3rd-year PhD student at the University of Nottingham, partnered with Dstl. My PhD project is about the detection of deep-learning-generated aerial images, with the final goal of improving current detection models.

I am looking for participants to take part in my ongoing online study on identifying synthetic aerial images, which we have created using Generative Adversarial Networks (GANs).

I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (Satellite aerial images) or GAN-generated images.

This is Study 2 in a larger PhD project looking at the generation and detection of GAN-synthesised earth observation data.

For more information on the project and studies please visit https://aiaerialimagery.wordpress.com/

 

Purpose: To assess how difficult it is to distinguish GAN-generated fake images from real satellite photos of rural and urban environments. This is part of a larger PhD project.

Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.

Commitment: The study consists of a short survey (2–5 minutes) followed by a longer detection task (10–20 minutes, which can be completed in your own time) hosted on Zooniverse.org.

This study involves identifying the synthetic image in each of a set of image pairs, then marking the parts of the image that informed your decision.

How to participate? Read through the information on the project site and proceed to the link for Study 2.

Project URL: https://aiaerialimagery.wordpress.com/   (See Study 2)

Study URL: https://formfaca.de/sm/_kBsk76eo

About the Zooniverse platform: https://www.zooniverse.org/lab

For any additional information or queries please feel free to contact me:
+44 (0) 747 386 1599     matthew.yates1@nottingham.ac.uk

Thanks for your time.

Matthew Yates

 

Internship as UX/UI designer

post by Serena Midha (2017 cohort)

As someone whose journey so far has been straight through education, from school to BSc to MSc and PhD, exposure to life outside of the education bubble has been fairly limited. So, for the internship, I was keen to work in industry!

With the arrival of the pandemic in my third year, there was a fair amount of concern that internship opportunities would be sparse. Around the time I was starting to mildly panic, an advertisement appeared for a virtual internship as a UX/UI Designer. The company was a start-up called Footfalls and Heartbeats, who had developed a technology that allows knitted yarns to act as physiological sensors. The internship was focussed on one product: a knee sleeve designed to provide physiological feedback to physiotherapists and athletes during training. The product was still under development, but the prototype looked just like the soft knee braces that weightlifters wear. The data it could measure included the range of motion of the knee and a squat repetition count; the product had the potential to measure velocity too, but that was an aim for further in the future.

The description seemed tailored to my idea of an ideal internship! It was related to my PhD, as my research involves investigating effective ways of conveying brain data to users, and the internship project investigated ways of conveying the physiological data from the knee sleeve to users. The description of the project also suited my interests in sport (and, weirdly, knees and sewing). I applied and was lucky enough to be accepted. The application process had a few stages. The first was the submission of a CV and personal statement. After that, I was asked to do a practical task: a UX task of evaluating where I would place a certain aspect of the knee sleeve connection within the app, and a UI task of making high-fidelity wireframes in Figma (a design tool) based on low-fidelity wireframes that were provided. The task had a 5-day deadline and I had no UI experience. To be honest, I had never heard of Figma (or high-fidelity wireframes, or basically anything to do with UI), so I spent all 5 days watching YouTube videos and doing a lot of learning! An interview with a director and a data scientist/interface designer followed the practical task, and they liked my design (somehow I forgot to tell them that I had only just learned what Figma was)!

There were two of us doing the internship: I was to design the desktop app and the other intern the mobile and tablet apps. We were supervised by the data scientist who had interviewed me; he was a talented designer, which meant he often took on design roles in the company. He wanted to create an office-like atmosphere even though we were working remotely, so the three of us remained on a voice call all day (muted) and piped up when we wanted to discuss anything.

With the product still very much under development and its direction ever-changing, our project changed at every weekly team meeting for the first 4 or 5 weeks. I think this was because the company wasn’t really sure where the product was going: they would ask us to do something, like display a certain type of data, only for us to find out the next week that the product couldn’t measure that type of data. The product was originally supposed to be a business-to-consumer product, so we started designing a detailed app fit for end users; but the company’s crowdfunding was unsuccessful, so they changed direction to create a business-to-business product. This meant our project changed to designing a tablet demo app which showcased what the product could do. They definitely didn’t need two interns for this project, but we made it work!

The most stand-out thing about the whole internship was the lack of market research within the team – I don’t think there was any! The product was designed for professional athletes and physiotherapists, yet I really couldn’t see how the two main sources of data it could measure would be useful for either party. I was pretty sure athletes wouldn’t want an app to count their reps when they could do it in their heads, and I was pretty sure that physios were happy measuring range of motion with a plastic goniometer (and the knee sleeve wouldn’t fit patients with swollen knees). I raised these points and the company asked me to speak to my personal physio; his feedback was that he would have no use for the knee sleeve. However, the company decided to carry on with these functions as the main focus of the knee sleeve measurements, I think because measuring this data was most achievable in the short term. The whole thing was proper baffling!

However, by the end of the internship we had produced a really nice demo app, and I had learned a lot about how to design a whole app! We generally started with sketches of designs, which were then digitised into low-fidelity wireframes and developed into the high-fidelity end version. I also learned about some really helpful tools that designers use, such as font identifiers and colour palette finders. We produced a design document which communicated our designs in detail to the engineers who were going to build the app. And I had a very valuable insight into a start-up company which was chaotic yet friendly.

My supervisor on the project was great to work with. He made sure we got the most out of the internship and had fun whilst doing it, and he created a very safe space between the three of us. The company had a very inclusive and supportive atmosphere and they made us feel like part of the team. I think the product has a lot of potential but needs developing further, which would mean a later release date. I’m most looking forward to seeing what happens with the knitted sensor technology, as it has many potential applications, such as in furniture or shoes.

 

Build a Better Primary

post by Dr Rachel Jacobs (2009 cohort)

My studio in Nottingham – Primary – is running a large crowdfunding campaign to support developing the building and to keep the arts resilient in Nottingham after the pandemic. They are offering artworks, art books, postcards and more in return for support for their developments.

Primary is a local artist-led contemporary visual arts organisation, based at the old Douglas Road Primary School in Radford, Nottingham. They run a free public programme of events and exhibitions, and provide studio spaces to over 50 resident artists. They are a vital arts space for the city. They have worked regularly with Horizon and the Mixed Reality Lab through my work and other collaborations with researchers.

If you are interested in supporting them and receiving artworks in return look at the website here: https://www.crowdfunder.co.uk/build-a-better-primary

Please also pass this on to anyone you know who loves art and you think might be interested!

Thank you!
Rachel

 

Call for Participants – Interaction for Children & Young People

Have you been affected by not seeing your family and friends during Covid-19 restrictions?

Horizon CDT PhD student Mel Wilson (2018 cohort) is looking for participants to help with her research into the effects of Covid-19 on children and young people.

You can find out more details on how to participate here.

For any additional information or queries please feel free to contact Mel at  Melanie.Wilson@nottingham.ac.uk.

 

Podcast – Creating an Autistic Space

Jenn Layton Annable (2020 cohort) is researching the intersection between gender, autistic experience, and self-identity.

Jenn joins Hanna Bertilsdotter-Rosqvist on the AutSpace podcast to discuss how terminology, the choice of words, is essential in the process of creating an autistic space. They also discuss the unusual internal sensory differences that Jenn experiences.

In the talk, Jenn refers to an article called Sensory Strangers, a chapter in the book Neurodiversity Studies: A New Critical Paradigm, published by Routledge, of which Jenn is a co-author.

If you are interested in reading the article you can find it here.

 

Attending FAccT 2021

posted by Ana Rita Pena (2019 cohort)

The ACM Conference on Fairness, Accountability and Transparency (FAccT 2021) is an interdisciplinary conference with an interest in research on “ethical” socio-technical systems. Hosted entirely online, the 2021 edition was the 4th edition of the conference, which started fairly small in 2018 but has received a growing amount of interest over the last couple of editions.

The conference started on the 3rd of March with the Doctoral Colloquium, followed by a tutorials day (divided into three tracks: Technology; Philosophy/Law/Society; and Practice) and a CRAFT day.

Before the colloquium we were asked to prepare an informal presentation on our PhD work to give to the other participants in small groups. Having small breakout groups led to very engaging back-and-forth discussions on everyone’s work. Following on from that, we had the choice of several discussion topics, each in a different breakout room; the topics ranged from research interests to career advice to current world events. For the last activity of the colloquium, we were divided by similar research interests and each group was allocated a mentor. The discussions ranged from understanding how all of the attendees’ research fitted together within a higher ecosystem to various approaches to incorporating our world/political views within our research. When focusing on our own work it is easy to lose sight of the bigger picture and to stop critically evaluating our own approach, so having a space to discuss it with a varied group of people working in a similar area was one of the most enriching experiences of the conference.

Another personal highlight of the conference was the CRAFT session “An Equality Opportunity: Combating Disability Discrimination in AI”, presented by Lydia X. Z. Brown, Hannah Quay-de la Vallee, and Stan Adams (Center for Democracy & Technology). The CRAFT sessions are specifically designed to bring academics of different disciplines together to discuss current open problems. While algorithmic bias and discrimination regarding race and gender are more widely studied, disability bias has been severely understudied, in part because of the difficulty of summarising the varied disability spectrum in discrete labels. The session’s discussion imagined possible ways to address disability bias while still giving a voice to people with lived experience.

After the weekend, there were three full days of paper presentations. Each day there was a panel session on a given topic, followed by a keynote. On day one the panel topic was “Health Inequities, Machine Learning, and the Covid Looking Glass”, followed by an excellent keynote by Yeshimabeit Milner from Data For Black Lives on health, technology, and race (https://www.youtube.com/watch?v=CmaNsbB-bIo). The second day’s discussion was around the flaws of mathematical models of causality and of fairness approaches. To end the conference on a more optimistic note, the final discussions covered possible future directions and the role of good journalism in auditing algorithms and making them accountable to the public. The keynote speaker was Julia Angwin, the first journalist to report on the bias of the COMPAS recidivism prediction tool. The COMPAS dataset was one of the issues that gave the topic of algorithmic fairness traction, and it is still commonly used in the fairness in machine learning literature. Julia is currently in charge of The Markup, an independent, not-for-profit newsroom that focuses on data-driven journalism.

The discussions at the conference gave me space to look at my own work and critically reflect on what I am doing, why I am doing it, and the approach I am taking, a conversation with myself that is still in progress. It was not only the very interesting research presented, but also the deep discussions that took place, that made attending FAccT 2021 an enriching experience.

Here are some of my favourite papers of the conference:

Representativeness in Statistics, Politics, and Machine Learning
(https://dl.acm.org/doi/10.1145/3442188.3445872)

Epistemic values in feature importance methods: Lessons from feminist epistemology (https://dl.acm.org/doi/10.1145/3442188.3445943)

Emoting a wider range of expressions in Teams

Many of us are spending a lot of time in Teams meetings. One challenge of remote working is the reduced ability to express, and pick up, subtle body language and facial cues, which can contribute to difficulty communicating – even before broadband connection comes into play.

Microsoft launched Reactions in Teams in December, which allows us to show a reaction while someone else is talking.

This is great, and people in meetings I’ve been in have found it really helpful. However, there are currently very limited options to emote: we can like (thumbs up), love (heart), clap or laugh. Or put our hand up.

But we can’t use it to express different emotions. In particular, all the reactions are positive. This may contribute to pleasant team meetings, but risks contributing to ‘groupthink‘. Uncertainty, dissatisfaction and frustration are important social signals, often communicated through subtle facial cues which may be impossible to spot on a Teams call. If I’m not feeling comfortable for some reason in a Teams call, my only options are to speak out verbally, keep schtumm, or use the comments (which a speaker may not see).

I was recently in an excellent session on challenging conversations – having a visual way to challenge statements may add to verbal intervention as a way to signal that something is not OK.

Zoom and Slack have a much wider range. Taking Zoom as an example: more options, still quite positive, but with the ability to give a thumbs down or say ‘No’.

So how do we extend the range of emotional expression in Teams? Microsoft say they’re working on extending the range, but there isn’t a timescale.

Someone has created a technical solution, but it needs to be set up by sysadmins in the organisation (example).

I’ve come across other ways for signalling emotions, including non technical – for example, some teaching staff encourage students to use their Teams/Zoom background, or even their clothes, to signal how they’re feeling (red or amber for different shades of ‘I’ve got some concerns’).

From a discussion at a team meeting, I decided to try to solve this problem using Snap filters. The brief was to create a filter that allowed a wider range of emotes, presented in the same style as the existing Teams reactions, and in particular to plug the gaps in the current reactions around expressing uncertainty or concern.

I present – the Emoji Board! Use the link to access, or scan the following with Snapchat:

Using this with Snap Camera allows the following emotes, presented in the same style as Teams reactions (they appear on screen for 3 seconds, centred, with a transparent background):

😂 lol/hilarious (emphasis)

🤔 hmm/uncertainty

👎 dislike

😱 shock

😭 cry

 

 

The filter should be usable on mobile phones but is optimised for use with Teams (or Zoom) on a laptop. To use it, click on the sides of the Snap Camera screen to pop out the emotes.

Screen press locations

–originally posted on Vincent’s blog

Addressing the Human Fallibility That Leads to a Data Breach

Article written by Farid Vayani (2020 cohort)
Originally published in the ISACA Journal


Within the last decade or so, cyberincidents have made headlines and have become top strategic risk factors for enterprises. These incidents have not spared even high-profile enterprises and government bodies. Despite significant investments in cyberdefense, these entities are still considered soft targets by attackers. It has become clear that the weakest link in the security chain is the human factor.

Negligence is a key aspect of human fallibility. Employees and contractors fail to heed security training, enterprise policies, and applicable laws and regulations, which may be regarded as mere check-the-box exercises at the time of joining the enterprise. Negligent insiders are responsible for 62 percent of cyberincidents.1

Consequently, cybersecurity management can no longer be treated as something distinct from the business or as merely an IT department issue. Senior leadership must enhance the enterprise’s cybersecurity strategy by ensuring a security risk-aware culture and working with employees, contractors, regulators, peer organizations and third-party suppliers to reduce the risk of cyberincidents. Ownership of cybersecurity risk at the top helps secure the trust and confidence of all stakeholders while setting the appropriate tone.2

Intra- vs. Cross-Organizational Cybersecurity Management

The growing number of insider threats, the expanding regulatory requirements to safeguard personal and sensitive data, the complexity of responding to changing attack vectors, and the pressure created by these circumstances demand a shift in cybersecurity management from intraorganizational to cross-organizational. In cross-organizational cybersecurity management, the sharing of threat intelligence is of paramount importance.3 This includes information that can mitigate insider threats, such as background checks to determine credit rating, employment history and criminal convictions. Intraorganizational cybersecurity management, in contrast, caters to a noncollaborative and independent type of security management,4 which leads to a siloed approach and enables insider threats to materialize and expand effortlessly.

The Human Side of Organizations

The human work motivation and management Theory Y proposes an environment in which leading by example extends respect, dignity and inspiration to employees, encouraging them to become ethical and disciplined in accepting and conforming to the enterprise’s security culture.5 In contrast, Theory X takes a cynical view of human nature and leads to an adversarial relationship between leaders and employees.6 Social learning theory suggests that weak leadership is to blame for an apathetic and uncooperative workforce; thus top management should be held accountable for the security culture, ensuring its acceptance by articulating its core ethical values and principles through verbal expressions and reminders.7

Consider the example of a security audit conducted in a Theory X vs. Theory Y enterprise. In a Theory X enterprise, there is a bureaucratic chain of command. The auditor discovers a problem and reports it to the information security officer. The security officer passes the information on to the department head, who, in turn, informs the team leader of the non-compliance issue. The team leader summons the employee or employees closest to the source of the problem. This creates a confrontational environment because the employees may have been unaware that their activities were being audited.

In a Theory Y enterprise, the auditor collaborates with the relevant employees when setting the objectives of the audit and engages them directly when a problem is discovered, thus enabling them to own and address the problem. The auditor’s report still climbs the official ladder, but by the time it arrives at the top, the employees have already taken the appropriate steps to mitigate the issue. Employees appreciate feedback from the top and recognize that the enterprise is not interested in punishing them. Such an up-front approach creates mutual trust, respect and an improved security culture.

Conclusion and Recommendations

Most enterprise leaders are not experts in cybersecurity management, but such expertise is not required to make effective decisions. Leaders should take the following steps:

    • Train employees properly, and make sure that they are aware of proper procedures. This goes a long way in mitigating cybersecurity risk and improving the enterprise’s security posture.
    • Integrate human resources management processes into the cybersecurity strategy to identify and address any potential insider threats that could lead to data breaches and result in regulatory fines, damage to business reputation and financial losses. The motive is not always financial gain; it could be vengeance on the part of a disgruntled employee or contractor due to a denied promotion, unfair treatment or poor working conditions. Although malicious acts constitute only 23 percent of all incidents, their impact can be far reaching.8
    • Create a security culture that belongs to everyone, articulate security goals and monitor the enterprise’s security posture from the outset. An enterprise’s security culture dictates the behavior of its employees and the enterprise’s success in sustaining an adequate security posture.
    • Ensure that the security culture is inclusive and permeates all parts of the enterprise.
    • Foster transparency, develop trust and enhance communications in both directions (bottom up and top down), which will facilitate collaborative ideas, better coordination and positive results.

Above all, ownership of cybersecurity risk at the top is key to getting the security culture right and fostering the desired security behaviors.

“AN ENTERPRISE’S SECURITY CULTURE DICTATES THE BEHAVIOR OF ITS EMPLOYEES AND THE ENTERPRISE’S SUCCESS IN SUSTAINING AN ADEQUATE SECURITY POSTURE.”

Author’s Note

The views expressed in this article are the author’s views and do not represent those of the organization or the professional bodies with which he is associated.

Endnotes

  1. Ponemon Institute, 2020 Cost of Insider Threats Global Report, USA, 2020, https://www.proofpoint.com/us/resources/threat-reports/2020-cost-of-insider-threats
  2. Bandura, A.; “Social Cognitive Theory: An Agentic Perspective,” Annual Review of Psychology, vol. 52, February 2001, https://www.annualreviews.org/doi/abs/10.1146/annurev.psych.52.1.1
  3. Abiteboul, S.; R. Agrawal; P. Bernstein; M. Carey; S. Ceri; B. Croft; D. DeWitt; M. Franklin; H. Garcia-Molina; D. Gawlick; et al.; “The Lowell Database Research Self-Assessment,” Communications of the ACM, vol. 48, iss. 5, May 2005, http://dl.acm.org/citation.cfm?doid=1060710.1060718
  4. Settanni, G.; F. Skopik; Y. Shovgenya; R. Fiedler; M. Carolan; D. Conroy; K. Boettinger; M. Gall; G. Brost; C. Ponchel; M. Haustein; H. Kaufmann; K. Theuerkauf; P. Olli; “A Collaborative Cyber Incident Management System for European Interconnected Critical Infrastructures,” Journal of Information Security and Applications, vol. 34, part 2, June 2017, p. 166–182, https://www.sciencedirect.com/science/article/abs/pii/S2214212616300576
  5. McGregor, D. M.; Human Side of Enterprise, McGraw-Hill, USA, 1957
  6. Ibid.
  7. Op cit Bandura
  8. Op cit Ponemon Institute