My studio in Nottingham – Primary – is running a large crowdfunding campaign to support developing the building and keeping the arts resilient in Nottingham after the pandemic. In return for support, they are offering artworks, art books, postcards and more.
Primary is a local artist-led contemporary visual arts organisation, based at the old Douglas Road Primary School in Radford, Nottingham. They run a free public programme of events and exhibitions, and provide studio spaces to over 50 resident artists. They are a vital arts space for the city. They have worked regularly with Horizon and the Mixed Reality Lab through my work and other collaborations with researchers.
Have you been affected by not seeing your family and friends during Covid-19 restrictions?
Horizon CDT PhD student Mel Wilson (2018 cohort) is looking for participants to help with her research into the effects of Covid-19 on children and young people.
You can find out more details on how to participate here.
For any additional information or queries please feel free to contact Mel at Melanie.Wilson@nottingham.ac.uk.
Jenn Layton Annable (2020 cohort) is researching the intersection between gender, autistic experience, and self-identity.
Jenn joins Hanna Bertilsdotter-Rosqvist on the AutSpace podcast to discuss how terminology, the choice of words, is essential in the process of creating an autistic space. They also discuss the unusual internal sensory differences that Jenn experiences.
In the talk, Jenn refers to an article called Sensory Strangers. This is a chapter, co-written by Jenn, in the book Neurodiversity Studies: A New Critical Paradigm, published by Routledge.
If you are interested in reading the article you can find it here.
The ACM Conference on Fairness, Accountability and Transparency (FAccT 2021) is an interdisciplinary conference with an interest in research on “ethical” socio-technical systems. Hosted entirely online, the 2021 edition was the 4th edition of the conference, which started fairly small in 2018 but has attracted a growing amount of interest over the last couple of editions.
The conference started on the 3rd of March with the Doctoral Colloquium, followed by a tutorials day (divided into three tracks: Technology; Philosophy/Law/Society; and Practice) and a CRAFT day.
Before the colloquium we were asked to prepare an informal presentation on our PhD work to give to the other participants in small groups. The small breakout groups led to very engaging back-and-forth discussions on everyone’s work. Following on from that, we had a choice of several discussion topics, each in a different breakout room; the topics ranged from research interests to career advice to current world events. For the last activity of the colloquium, we were grouped by similar research interests and each group was allocated a mentor. The discussions ranged from understanding how all of the attendees’ research fitted together within a wider ecosystem to various approaches to incorporating our world and political views into our research. When focusing on our own work it is easy to lose sight of the bigger picture, or to stop critically evaluating our own approach, so having a space to discuss it with a varied group of people working in a similar area was one of the most enriching experiences of the conference.
Another personal highlight of the conference was the CRAFT session “An Equality Opportunity: Combating Disability Discrimination in AI”, presented by Lydia X. Z. Brown, Hannah Quay-de la Vallee, and Stan Adams (Center for Democracy & Technology). The CRAFT sessions are specifically designed to bring academics from different disciplines together to discuss current open problems. While algorithmic bias and discrimination regarding race and gender are more widely studied, disability bias has been severely understudied, in part because of the difficulty of summarising the varied disability spectrum in discrete labels. The session invited participants to imagine possible ways to address disability bias, while still giving a voice to people with lived experience.
After the weekend, there were three full days of paper presentations. Each day there was a panel session on a given topic, followed by a keynote. On day one the panel topic was “Health Inequities, Machine Learning, and the Covid Looking Glass”, followed by an excellent keynote by Yeshimabeit Milner from Data For Black Lives on health, technology, and race (https://www.youtube.com/watch?v=CmaNsbB-bIo for the keynote video). The second day’s discussion centred on the flaws of mathematical models of causality and of fairness approaches. To end the conference on a more optimistic note, the final discussions covered possible future directions and the role of good journalism in auditing algorithms and making them accountable to the public. The keynote speaker was Julia Angwin, the first journalist to report on the bias of the COMPAS recidivism prediction tool. The COMPAS dataset bias was one of the issues that made the topic of algorithmic fairness gain traction, and it is still commonly used in the fairness in machine learning literature. Julia is currently in charge of The Markup, an independent, not-for-profit newsroom that focuses on data-driven journalism.
The different discussions at the conference gave me some space to look at my own work and critically reflect on what I am doing, why I am doing it and the approach I am taking, a conversation with myself that is still in progress. It was not only the very interesting research that was presented, but also the deep discussions that took place, that made attending FAccT 2021 an enriching experience.
Here are some of my favourite papers of the conference:
Many of us are spending a lot of time in Teams meetings. One challenge of remote working is the reduced ability to express, and pick up, subtle body language and facial cues, which can contribute to difficulty communicating – even before broadband connection comes into play.
Microsoft launched Reactions in Teams in December, which allows us to show a reaction while someone else is talking.
This is great, and people in meetings I’ve been in have found it really helpful. However, there are currently very limited options to emote. We can either like (thumbs up), love (heart), clap or laugh. Or put our hand up.
But we can’t use it to express different emotions. In particular, all the reactions are positive. This may contribute to pleasant team meetings – but risks contributing to ‘groupthink‘. Uncertainty, dissatisfaction and frustration are important social signals often communicated through subtle facial cues, which on a Teams call may be impossible to spot. If I’m not feeling comfortable for some reason in a Teams call, my only options are to speak out verbally, keep schtumm, or use the comments (which a speaker may not see).
I was recently in an excellent session on Challenging conversations – having a visual way to challenge statements may add to verbal intervention as a way to signal that something is not OK.
Zoom and Slack have a much wider range. Taking Zoom as an example – more, still quite positive, but with the ability to thumbs down or say ‘No’:
So how do we extend the range of emotional expression in Teams? Microsoft say they’re working on extending the range, but there isn’t a timescale.
Someone has created a technical solution, but it needs to be set up by sysadmins in the organisation (example).
I’ve come across other ways for signalling emotions, including non technical – for example, some teaching staff encourage students to use their Teams/Zoom background, or even their clothes, to signal how they’re feeling (red or amber for different shades of ‘I’ve got some concerns’).
From a discussion at a team meeting I decided to try and solve this problem, using Snap filters. The brief here was to create a filter that allowed a wider range of emotes, presented in the same style as the existing Teams reactions, and in particular to plug the gaps in current reactions around expressing uncertainty or concern.
I present – the Emoji Board! Use the link to access, or scan the following with Snapchat:
Using this with Snap camera allows the following emotes, presented in the same style as Teams reactions (they appear on screen for 3 seconds, centred, with a transparent background).
The filter should be usable on mobile phones but is optimised for use with Teams (or Zoom) on a laptop. To use it, click on the sides of the Snap camera screen to pop out the emotes.
Within the last decade or so, cyberincidents have made headlines and have become top strategic risk factors for enterprises. These incidents have not spared even high-profile enterprises and government bodies. Despite significant investments in cyberdefense, these entities are still considered soft targets by attackers. It has become clear that the weakest link in the security chain is the human factor.
Negligence is a key aspect of human fallibility. Employees and contractors fail to heed security training, enterprise policies, and applicable laws and regulations, which may be regarded as mere check-the-box exercises at the time of joining the enterprise. Negligent insiders are responsible for 62 percent of cyberincidents.1
Consequently, cybersecurity management can no longer be treated as something distinct from the business or as merely an IT department issue. Senior leadership must enhance the enterprise’s cybersecurity strategy by ensuring a security risk-aware culture and working with employees, contractors, regulators, peer organizations and third-party suppliers to reduce the risk of cyberincidents. Ownership of cybersecurity risk at the top helps secure the trust and confidence of all stakeholders while setting the appropriate tone.2
Intra- vs. Cross-Organizational Cybersecurity Management
The growing number of insider threats, the expanding regulatory requirements to safeguard personal and sensitive data, the complexity of responding to changing attack vectors, and the pressure created by these circumstances demand a shift in cybersecurity management from intraorganizational to cross-organizational. In cross-organizational cybersecurity management, the sharing of threat intelligence is of paramount importance.3 This includes information that can mitigate insider threats, such as background checks to determine credit rating, employment history and criminal convictions. Intraorganizational cybersecurity management, in contrast, caters to a noncollaborative and independent type of security management,4 which leads to a siloed approach and enables insider threats to materialize and expand effortlessly.
The Human Side of Organizations
The human work motivation and management Theory Y proposes an environment in which leading by example extends respect, dignity and inspiration to employees, encouraging them to become ethical and disciplined in accepting and conforming to the enterprise’s security culture.5 In contrast, Theory X takes a cynical view of human nature and leads to an adversarial relationship between leaders and employees.6 Social learning theory suggests that weak leadership is to blame for an apathetic and uncooperative workforce; thus top management should be held accountable for the security culture, ensuring its acceptance by articulating its core ethical values and principles through verbal expressions and reminders.7
Consider the example of a security audit conducted in a Theory X vs. Theory Y enterprise. In a Theory X enterprise, there is a bureaucratic chain of command. The auditor discovers a problem and reports it to the information security officer. The security officer passes the information on to the department head, who, in turn, informs the team leader of the non-compliance issue. The team leader summons the employee or employees closest to the source of the problem. This creates a confrontational environment because the employees may have been unaware that their activities were being audited.
In a Theory Y enterprise, the auditor collaborates with the relevant employees when setting the objectives of the audit and engages them directly when a problem is discovered, thus enabling them to own and address the problem. The auditor’s report still climbs the official ladder, but by the time it arrives at the top, the employees have already taken the appropriate steps to mitigate the issue. Employees appreciate feedback from the top and recognize that the enterprise is not interested in punishing them. Such an up-front approach creates mutual trust, respect and an improved security culture.
Conclusion and Recommendations
Most enterprise leaders are not experts in cybersecurity management, but such expertise is not required to make effective decisions. Leaders should take the following steps:
Train employees properly, and make sure that they are aware of proper procedures. This goes a long way in mitigating cybersecurity risk and improving the enterprise’s security posture.
Integrate human resources management processes into the cybersecurity strategy to identify and address any potential insider threats that could lead to data breaches and result in regulatory fines, damage to business reputation and financial losses. The motive is not always financial gain; it could be vengeance on the part of a disgruntled employee or contractor due to a denied promotion, unfair treatment or poor working conditions. Although malicious acts constitute only 23 percent of all incidents, their impact can be far reaching.8
Create a security culture that belongs to everyone, articulate security goals and monitor the enterprise’s security posture from the outset. An enterprise’s security culture dictates the behavior of its employees and the enterprise’s success in sustaining an adequate security posture.
Ensure that the security culture is inclusive and permeates all parts of the enterprise.
Foster transparency, develop trust and enhance communications in both directions (bottom up and top down), which will facilitate collaborative ideas, better coordination and positive results.
Nevertheless, ownership of cybersecurity risk at the top is key to getting the security culture right and fostering the desired security behaviors.
Author’s Note
The views expressed in this article are the author’s views and do not represent those of the organization or the professional bodies with which he is associated.
Endnotes
Ponemon Institute, 2020 Cost of Insider Threats Global Report, USA, 2020, https://www.proofpoint.com/us/resources/threat-reports/2020-cost-of-insider-threats
Bandura, A.; “Social Cognitive Theory: An Agentic Perspective,” Annual Review of Psychology, vol. 52, February 2001, https://www.annualreviews.org/doi/abs/10.1146/annurev.psych.52.1.1
Abiteboul, S.; R. Agrawal; P. Bernstein; M. Carey; S. Ceri; B. Croft; D. DeWitt; M. Franklin; H. Garcia Molina; D. Gawlick; et al.; “The Lowell Database Research Self-Assessment,” Communications of the ACM, vol. 48, iss. 5, May 2005, http://dl.acm.org/citation.cfm?doid=1060710.1060718
Settanni, G.; F. Skopik; Y. Shovgenya; R. Fiedler; M. Carolan; D. Conroy; K. Boettinger; M. Gall; G. Brost; C. Ponchel; M. Haustein; H. Kaufmann; K. Theuerkauf; P. Olli; “A Collaborative Cyber Incident Management System for European Interconnected Critical Infrastructures,” Journal of Information Security and Applications, vol. 34, part 2, June 2017, p. 166–182, https://www.sciencedirect.com/science/article/abs/pii/S2214212616300576
McGregor, D. M.; Human Side of Enterprise, McGraw-Hill, USA, 1957
The summer school programme I enrolled on this summer was the 3rd edition of the International Summer School Programme on Artificial Intelligence, with the theme Artificial Intelligence from Deep Learning to Data Analytics (AI-DLDA 2020).
The organisers of the program were the University of Udine, Italy, in partnership with Digital Innovation Hub Udine, Italian Association of Computer Vision Pattern Recognition and Machine Learning (CVPL), Artificial Intelligence and Intelligent Systems National Lab, AREA Science Park and District of Digital Technologies ICT regional cluster (DITEDI).
Usually, the AI-DLDA summer school programme is held in Udine, Italy; however, following the development of the COVID-19 situation, this year’s edition was held entirely online via an educational platform. It lasted five days, from Monday 29 June until Friday 3 July 2020. About 32 PhD students from over 8 different countries participated in the summer school, alongside master’s students, researchers from across the world, and several practitioners from Italian industry and ministries.
School Structure
The school program was organised and structured into four intensive teaching days with keynote lectures in the morning sessions and practical workshops in the afternoon sessions. The keynote lectures were delivered by 8 different international speakers from top universities and high profile organisations. Both lectures and lab workshop sessions were delivered via a dedicated online classroom.
Key Note Lectures and Workshops
Day 1, Lecture 1: The first keynote lecture was on the theme Cyber Security and Deep Fake Technology: Current Trends and Perspectives.
Deep fake technology refers to multimedia content created or synthetically altered using machine learning generative models; such synthetically derived content is popularly termed ‘deep fakes’. It was stated that with the current rise in deep fakes and synthetic media content, the historical belief that images, video and audio are reliable records of reality is no longer tenable. The image in figure 1 below shows an example of the deep fake phenomenon.
Research on deep fake technology shows that the phenomenon is growing rapidly online, with the number of fake videos doubling over the past year. The increase in deep fakes is reportedly driven by the growing ubiquity of tools and services that have lowered the barrier to entry and enabled novices to create deep fakes. The machine learning models used to create or modify such multimedia content are Generative Adversarial Networks (GANs); variants of the technique include StarGAN and StyleGAN.
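To give a feel for the adversarial objective at the heart of GANs, here is a minimal sketch (an illustrative toy in plain Python, not any of the speakers' code): the discriminator is trained to score real images near 1 and generated images near 0, while the generator is trained to make the discriminator score its outputs near 1.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy for the discriminator: it should score
    real images near 1 and generated (fake) images near 0."""
    return (-sum(math.log(p) for p in d_real) / len(d_real)
            - sum(math.log(1 - p) for p in d_fake) / len(d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: the generator is rewarded when
    the discriminator mistakes its outputs for real (scores near 1)."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

# A generator that fools the discriminator (fakes scored ~0.9) incurs a
# much lower loss than one whose fakes are spotted (scored ~0.1).
fooled = generator_loss([0.9, 0.85])
not_fooled = generator_loss([0.1, 0.15])
```

In a real GAN these losses are minimised alternately by gradient descent over the two networks' parameters; the "convolutional traces" mentioned below are artefacts this generation process leaves behind.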
The speakers presented their own work, which focused on detecting deep fakes by analysing convolutional traces [5]. They focused on images of human faces, trying to detect convolutional traces hidden in those images: a sort of fingerprint left by the image generation process. They proposed a new deep fake detection technique based on the Expectation-Maximization algorithm. Their method outperformed existing methods and proved effective in detecting fake images of human faces generated by recent GAN architectures.
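The authors' actual detector extracts convolutional traces from GAN pipelines; purely as a simplified illustration of the Expectation-Maximization algorithm at its core, here is EM fitting a two-component 1-D Gaussian mixture (a toy stand-in, not their feature extraction):

```python
import math
import random

def em_gmm(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns the two estimated component means, sorted."""
    mu = [min(data), max(data)]   # crude but well-separated initialisation
    sigma = [1.0, 1.0]
    weight = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            p = [weight[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weight[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2
                      for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return sorted(mu)

random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(5.0, 0.5) for _ in range(200)])
means = em_gmm(data)   # should recover means near 0 and 5
```

The same alternation of "assign responsibilities" and "re-estimate parameters" underlies the paper's use of EM over per-pixel generation artefacts.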
This lecture was really insightful for me because I got the opportunity to learn about Generative Adversarial Networks and to understand their architectures and real-world applications directly from leading researchers.
Day 1, Lecture 2: Petia Radeva from the University of Barcelona gave a lecture on food recognition. The presentation discussed uncertainty modelling for food analysis within an end-to-end framework. They treated food recognition as a Multi-Task Learning (MTL) problem, as identifying foods automatically across the world’s different cuisines is challenging due to uncertainty. The MTL problem is shown in figure 2 below. The presentation introduced aleatoric uncertainty modelling to address the problem of uncertainty and to make the food image recognition model smarter [2].
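The exact formulation used in the talk is in [2]; as a generic sketch of how aleatoric (data) uncertainty is often modelled (a Kendall-and-Gal-style regression loss, not necessarily the presenter's), the network predicts a value together with a log-variance, and the loss attenuates residuals on inputs the model flags as noisy:

```python
import math

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Aleatoric-uncertainty regression loss: residuals are down-weighted
    by the predicted variance, while the 0.5 * log_var penalty stops the
    model from claiming unlimited uncertainty everywhere."""
    precision = math.exp(-log_var)
    return 0.5 * precision * (y_true - y_pred) ** 2 + 0.5 * log_var

# A large error is cheaper when the model admits high uncertainty...
confident_wrong = heteroscedastic_loss(1.0, 5.0, log_var=0.0)
uncertain_wrong = heteroscedastic_loss(1.0, 5.0, log_var=2.0)
# ...but claiming uncertainty on an easy example still costs the penalty.
confident_right = heteroscedastic_loss(1.0, 1.0, log_var=0.0)
uncertain_right = heteroscedastic_loss(1.0, 1.0, log_var=2.0)
```

The intuition carries over to classification: ambiguous food images (dishes shared across cuisines) get higher predicted uncertainty instead of forcing a confident wrong label.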
Day 1, Lecture 3: The final keynote lecture on day 1 focused on robotics, on the topic Learning Vision-based, Agile Drone Flight: from Frames to Event Cameras, delivered by Davide Scaramuzza from the University of Zurich.
He presented several pieces of cutting-edge research in the field of robotics, including real-time, onboard computer vision and control for autonomous, agile drone flight [3]. Figure 3 below shows autonomous drone racing from a single flight demonstration.
The presentation also included an update on their current research on the open challenges of computer vision, arguing that the past 60 years of research have been devoted to frame-based cameras, which arguably are not good enough, and proposing event-based cameras as a more efficient and effective alternative, since they do not suffer from the problems faced by frame-based cameras [4].
Day 1, Workshop Labs: During the first workshop we had a practical introduction to the PyTorch deep learning framework and the Google Colab environment, led by Dott. Lorenzo Baraldi from the University of Modena and Reggio Emilia.
Day 2, Lecture 1: Prof. Di Stefano gave a talk on scene perception and unlabelled data using deep convolutional neural networks. His lecture focused on depth estimation by stereo vision and the performance of computer vision models on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) benchmark dataset. He also discussed novel advancements in methods for solving computer vision problems such as monocular depth estimation, proposing that it can be solved via transfer learning [10].
Day 2, Lecture 2: Prof. Cavallaro from Queen Mary University of London delivered a lecture on robust and privacy-preserving multi-modal learning with body cameras.
Lab 2 – Part I: Sequence understanding and generation, again led by Dott. Lorenzo Baraldi (University of Modena and Reggio Emilia).
Lab 2 – Part II: Deep reinforcement learning for control (Dott. Matteo Dunnhofer, University of Udine).
Day 3, Lecture 1: The keynote lecture Self-supervised Learning: Getting More for Less Out of Your CNNs was delivered by Prof. Bagdanov from the University of Florence. In his lecture he discussed self-supervised representation learning and self-supervision for niche problems [6].
Day 3, Lecture 2 was delivered by keynote speaker Prof. Samek from the Fraunhofer Heinrich Hertz Institute, on a hot topic in the field of artificial intelligence: Explainable AI: Methods, Applications and Extensions.
The lecture covered an overview of current AI explanation methods and examples of their real-world applications. We learnt that AI explanation methods can be divided into four categories: perturbation-based, function-based, surrogate-based and structure-based methods. Structure-based methods such as Layer-wise Relevance Propagation (LRP) [1] and Deep Taylor Decomposition [7] are to be preferred over function-based methods, as they are computationally fast and do not suffer the problems of the other types of methods. Figure 4 shows details of the layer-wise decomposition technique.
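As a minimal sketch of the idea behind LRP (a toy single linear layer, not the lecture's implementation), relevance is redistributed from a layer's outputs back onto its inputs in proportion to each input's contribution to the pre-activation, and the total relevance is conserved from layer to layer:

```python
def lrp_linear(a, w, R_out, eps=1e-9):
    """Basic LRP rule for one linear layer: redistribute the output
    relevances R_out onto the inputs a in proportion to each input's
    contribution a[j] * w[j][k] to the pre-activation z[k]."""
    n_in, n_out = len(a), len(R_out)
    z = [sum(a[j] * w[j][k] for j in range(n_in)) for k in range(n_out)]
    R_in = [0.0] * n_in
    for j in range(n_in):
        for k in range(n_out):
            R_in[j] += a[j] * w[j][k] / (z[k] + eps) * R_out[k]
    return R_in

# Toy layer: 3 input activations, 2 outputs; relevance is initialised
# to the output scores themselves, then propagated back to the inputs.
a = [1.0, 2.0, 0.5]
w = [[0.5, -0.2], [0.3, 0.8], [0.1, 0.4]]
z = [sum(a[j] * w[j][k] for j in range(3)) for k in range(2)]
R_in = lrp_linear(a, w, z)
```

Applying this rule layer by layer, from the output back to the pixels, yields the pixel-wise relevance heatmaps described in [1]; conservation (the input relevances summing to the output relevance) is what makes the attribution a decomposition of the prediction rather than an arbitrary saliency score.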
Overall, it was concluded that the decision functions of machine learning algorithms are often complex and difficult to analyse. Nevertheless, leveraging the model’s structure can simplify the explanation problem [9].
Lab 3 – Part I: Going Beyond Convolutional Neural Networks for Computer Vision, led by Dott. Niki Martinel and Dott.ssa Rita Pucci from the University of Udine.
Lab 3 – Part II: Going Beyond Convolutional Neural Networks for Computer Vision (Dott. Niki Martinel and Dott.ssa Rita Pucci, University of Udine).
Day 4: The final keynote lecture was delivered by Prof. Frontoni on human behaviour analysis. This talk concentrated on the study of human behaviours, specifically deep understanding of shopper behaviours and interactions using computer vision in the retail environment [8]. The presentation showed experiments conducted using different shopping datasets to tackle various retail problems, including user interaction classification, person re-identification, weight estimation and human trajectory prediction using multiple store datasets.
The second part of the morning session on day 4 was open for PhD students to present their research to the other participants on the programme.
Lab 4 – Part I: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine).
Lab 4 – Part II: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine).
Concluding Remarks
The summer school programme offered us the benefit of interacting directly with world leaders in Artificial Intelligence. The insightful presentations from leading AI experts updated us about the most recent advances in the area of Artificial Intelligence, ranging from deep learning to data analytics right from the comfort of our homes.
The keynote lectures from world leaders provided an in-depth analysis of the state-of-the-art research and covered a large spectrum of current research activities and industrial applications dealing with big data, computer vision, human-computer interaction, robotics, cybersecurity in deep learning and artificial intelligence. Overall, the summer school program was an enlightening and enjoyable learning experience.
References
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140, 2015.
Marc Bolaños, Marc Valdivia, and Petia Radeva. Where and what am I eating? Image-based food menu recognition. In European Conference on Computer Vision, pages 590–605. Springer, 2018.
Davide Falanga, Kevin Kleber, Stefano Mintchev, Dario Floreano, and Davide Scaramuzza. The foldable drone: A morphing quadrotor that can squeeze and fly. IEEE Robotics and Automation Letters, 4(2):209–216, 2018.
Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Jörg Conradt, Kostas Daniilidis, et al. Event-based vision: A survey. arXiv preprint arXiv:1904.08405, 2019.
Luca Guarnera, Oliver Giudice, and Sebastiano Battiato. Deepfake detection by analyzing convolutional traces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 666–667, 2020.
Xialei Liu, Joost Van De Weijer, and Andrew D Bagdanov. Exploiting unlabeled data in cnns by self-supervised learning to rank. IEEE transactions on pattern analysis and machine intelligence, 41(8):1862–1878, 2019.
Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65:211–222, 2017.
Marina Paolanti, Rocco Pietrini, Adriano Mancini, Emanuele Frontoni, and Primo Zingaretti. Deep understanding of shopper behaviours and interactions using rgb-d vision. Machine Vision and Applications, 31(7):1–21, 2020.
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J Anders, and Klaus-Robert Müller. Toward interpretable machine learning: Transparent deep neural networks and beyond. arXiv preprint arXiv:2003.07631, 2020.
Alessio Tonioni, Matteo Poggi, Stefano Mattoccia, and Luigi Di Stefano. Unsupervised adaptation for deep stereo. In Proceedings of the IEEE International Conference on Computer Vision, pages 1605–1613, 2017.
Hello everyone! I’m a 3rd year Horizon CDT PhD student partnered with the Defence Science and Technology Laboratory (Dstl). My PhD project is about the detection of deep learning generated aerial images, with the final goal of improving current detection models.
For this study, I am looking for participants to take part in my short online study on detecting fake aerial images. We have used Generative Adversarial Networks (GANs) to create these.
I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (e.g. aerial imagery, satellite images) or GAN-generated images.
Purpose: To assess the difficulty of distinguishing GAN-generated fake images from real aerial photos of rural and urban environments. This is part of a larger PhD project looking at the generation and detection of fake earth observation data.
Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.
Commitment: The study should take between 5 and 15 minutes to complete and is hosted online on pavlovia.org.
How to participate? Read through this Information sheet and follow the link to the study at the end.