Addressing the Human Fallibility That Leads to a Data Breach

Article written by Farid Vayani (2020 cohort)
Originally published in the ISACA Journal


Within the last decade or so, cyberincidents have made headlines and have become top strategic risk factors for enterprises. These incidents have not spared even high-profile enterprises and government bodies. Despite significant investments in cyberdefense, these entities are still considered soft targets by attackers. It has become clear that the weakest link in the security chain is the human factor.

Negligence is a key aspect of human fallibility. Employees and contractors fail to heed security training, enterprise policies, and applicable laws and regulations, which may be regarded as mere check-the-box exercises at the time of joining the enterprise. Negligent insiders are responsible for 62 percent of cyberincidents.1

Consequently, cybersecurity management can no longer be treated as something distinct from the business or as merely an IT department issue. Senior leadership must enhance the enterprise’s cybersecurity strategy by ensuring a security risk-aware culture and working with employees, contractors, regulators, peer organizations and third-party suppliers to reduce the risk of cyberincidents. Ownership of cybersecurity risk at the top helps secure the trust and confidence of all stakeholders while setting the appropriate tone.2

Intra- vs. Cross-Organizational Cybersecurity Management

The growing number of insider threats, the expanding regulatory requirements to safeguard personal and sensitive data, the complexity of responding to changing attack vectors, and the pressure created by these circumstances demand a shift in cybersecurity management from intraorganizational to cross-organizational. In cross-organizational cybersecurity management, the sharing of threat intelligence is of paramount importance.3 This includes information that can mitigate insider threats, such as background checks to determine credit rating, employment history and criminal convictions. Intraorganizational cybersecurity management, in contrast, caters to a noncollaborative and independent type of security management,4 which leads to a siloed approach and enables insider threats to materialize and expand effortlessly.

The Human Side of Organizations

Theory Y, a theory of human work motivation and management, proposes an environment in which leading by example extends respect, dignity and inspiration to employees, encouraging them to become ethical and disciplined in accepting and conforming to the enterprise’s security culture.5 In contrast, Theory X takes a cynical view of human nature and leads to an adversarial relationship between leaders and employees.6 Social learning theory suggests that weak leadership is to blame for an apathetic and uncooperative workforce; thus, top management should be held accountable for the security culture, ensuring its acceptance by articulating its core ethical values and principles through verbal expressions and reminders.7

Consider the example of a security audit conducted in a Theory X vs. Theory Y enterprise. In a Theory X enterprise, there is a bureaucratic chain of command. The auditor discovers a problem and reports it to the information security officer. The security officer passes the information on to the department head, who, in turn, informs the team leader of the non-compliance issue. The team leader summons the employee or employees closest to the source of the problem. This creates a confrontational environment because the employees may have been unaware that their activities were being audited.

In a Theory Y enterprise, the auditor collaborates with the relevant employees when setting the objectives of the audit and engages them directly when a problem is discovered, thus enabling them to own and address the problem. The auditor’s report still climbs the official ladder, but by the time it arrives at the top, the employees have already taken the appropriate steps to mitigate the issue. Employees appreciate feedback from the top and recognize that the enterprise is not interested in punishing them. Such an up-front approach creates mutual trust, respect and an improved security culture.

Conclusion and Recommendations

Most enterprise leaders are not experts in cybersecurity management, but such expertise is not required to make effective decisions. Leaders should take the following steps:

    • Train employees properly, and make sure that they are aware of proper procedures. This goes a long way in mitigating cybersecurity risk and improving the enterprise’s security posture.
    • Integrate human resources management processes into the cybersecurity strategy to identify and address any potential insider threats that could lead to data breaches and result in regulatory fines, damage to business reputation and financial losses. The motive is not always financial gain; it could be vengeance on the part of a disgruntled employee or contractor due to a denied promotion, unfair treatment or poor working conditions. Although malicious acts constitute only 23 percent of all incidents, their impact can be far reaching.8
    • Create a security culture that belongs to everyone, articulate security goals and monitor the enterprise’s security posture from the outset. An enterprise’s security culture dictates the behavior of its employees and the enterprise’s success in sustaining an adequate security posture.
    • Ensure that the security culture is inclusive and permeates all parts of the enterprise.
    • Foster transparency, develop trust and enhance communications in both directions (bottom up and top down), which will facilitate collaborative ideas, better coordination and positive results.

Above all, ownership of cybersecurity risk at the top is key to getting the security culture right and fostering the desired security behaviors.


Author’s Note

The views expressed in this article are the author’s views and do not represent those of the organization or the professional bodies with which he is associated.

Endnotes

  1. Ponemon Institute, 2020 Cost of Insider Threats Global Report, USA, 2020, https://www.proofpoint.com/us/resources/threat-reports/2020-cost-of-insider-threats
  2. Bandura, A.; “Social Cognitive Theory: An Agentic Perspective,” Annual Review of Psychology, vol. 52, February 2001, https://www.annualreviews.org/doi/abs/10.1146/annurev.psych.52.1.1
  3. Abiteboul, S.; R. Agrawal; P. Bernstein; M. Carey; S. Ceri; B. Croft; D. DeWitt; M. Franklin; H. Garcia-Molina; D. Gawlick; et al.; “The Lowell Database Research Self-Assessment,” Communications of the ACM, vol. 48, iss. 5, May 2005, http://dl.acm.org/citation.cfm?doid=1060710.1060718
  4. Settanni, G.; F. Skopik; Y. Shovgenya; R. Fiedler; M. Carolan; D. Conroy; K. Boettinger; M. Gall; G. Brost; C. Ponchel; M. Haustein; H. Kaufmann; K. Theuerkauf; P. Olli; “A Collaborative Cyber Incident Management System for European Interconnected Critical Infrastructures,” Journal of Information Security and Applications, vol. 34, part 2, June 2017, p. 166–182, https://www.sciencedirect.com/science/article/abs/pii/S2214212616300576
  5. McGregor, D. M.; The Human Side of Enterprise, McGraw-Hill, USA, 1957
  6. Ibid.
  7. Op cit Bandura
  8. Op cit Ponemon Institute

International Summer School Programme on Artificial Intelligence

post by Edwina Abam (2019 cohort)

Introduction

The summer school programme I enrolled on this year was the third edition of the International Summer School Programme on Artificial Intelligence, with the theme Artificial Intelligence from Deep Learning to Data Analytics (AI-DLDA 2020).

The programme was organised by the University of Udine, Italy, in partnership with Digital Innovation Hub Udine, the Italian Association of Computer Vision, Pattern Recognition and Machine Learning (CVPL), the Artificial Intelligence and Intelligent Systems National Lab, AREA Science Park and the District of Digital Technologies ICT regional cluster (DITEDI).

The AI-DLDA summer school programme is usually held in Udine, Italy. However, following the development of the COVID-19 situation, this year’s edition was held entirely online via an educational platform and lasted five days, from Monday 29 June until Friday 3 July 2020. About 32 PhD students from over 8 different countries took part, alongside master’s students, researchers from across the world and several industry practitioners from Italian companies and ministries.

School Structure

The school programme was organised and structured into four intensive teaching days, with keynote lectures in the morning sessions and practical workshops in the afternoon sessions. The keynote lectures were delivered by 8 international speakers from top universities and high-profile organisations. Both the lectures and the lab workshop sessions were delivered via a dedicated online classroom.

Keynote Lectures and Workshops

Day 1, Lecture 1: The first keynote lecture was delivered on the theme Cyber Security and Deep Fake Technology: Current Trends and Perspectives.

‘Deep fakes’ are multimedia content created or synthetically altered using machine learning generative models. It was stated that, with the current rise in deep fakes and synthetic media content, the historical belief that images, video and audio are reliable records of reality is no longer tenable. Figure 1 below shows an example of the deep fake phenomenon.

Research on deep fake technology shows that the phenomenon is growing rapidly online, with the number of fake videos doubling over the past year. The increase is reportedly driven by the growing ubiquity of tools and services that have lowered the barrier to entry and enabled novices to create deep fakes. The machine learning models used to create or modify such multimedia content are Generative Adversarial Networks (GANs); variants of the technique include StarGAN and StyleGAN.

Figure 1: Deep Fake Images
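For readers new to GANs, the sketch below (in PyTorch, the framework used in the Day 1 lab) shows the adversarial training loop at the heart of the technique: a generator learns to produce images from random noise while a discriminator learns to tell them apart from real ones. It is a deliberately tiny illustration with arbitrary layer sizes, not the architecture behind any real deep fake system.

```python
# Minimal GAN training sketch in PyTorch. Illustrative only: real deep fake
# generators such as StyleGAN are far larger and more sophisticated.
import torch
import torch.nn as nn

latent_dim = 100

# Generator: maps random noise to a flattened 64x64 "image" in [-1, 1].
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64), nn.Tanh())
# Discriminator: outputs a real-vs-fake logit for a flattened image.
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Update the discriminator to separate real from generated images.
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real_images), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Update the generator to fool the discriminator.
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Stand-in "real" data: in practice this would be a batch of face images.
print(train_step(torch.rand(16, 64 * 64) * 2 - 1))
```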

The speakers presented their own work, which focused on detecting deep fakes by analysing convolutional traces [5]. They concentrated on images of human faces, trying to detect convolutional traces hidden in those images: a sort of fingerprint left by the image generation process. They proposed a new deep fake detection technique based on the Expectation-Maximization algorithm. Their method outperformed current approaches and proved effective in detecting fake images of human faces generated by recent GAN architectures.
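The post does not detail the authors’ algorithm, but the general shape of Expectation-Maximization is easy to show. The toy sketch below fits a two-component 1D Gaussian mixture by alternating E and M steps; the actual method in [5] applies EM to estimate convolutional traces from pixel correlations, which is considerably more involved.

```python
# Toy Expectation-Maximization for a two-component 1D Gaussian mixture,
# showing the alternating E and M steps only; not the trace-extraction
# procedure of [5].
import numpy as np

def em_gmm(x, n_iter=100):
    mu = np.array([x.min(), x.max()], dtype=float)   # initial means
    var = np.array([x.var(), x.var()]) + 1e-6        # initial variances
    pi = np.array([0.5, 0.5])                        # initial mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate the parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500)])
print(em_gmm(x))
```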

This lecture was really insightful for me because I got the opportunity to learn about Generative Adversarial Networks, and to understand their architectures and real-world applications, directly from leading researchers.

Day 1, Lecture 2: Petia Radeva from the University of Barcelona gave a lecture on food recognition. The presentation discussed uncertainty modelling for food analysis within an end-to-end framework. Food recognition was treated as a multi-task learning (MTL) problem, as identifying foods automatically from different cuisines across the world is challenging due to uncertainty. The MTL problem is shown in figure 2 below. The presentation introduced aleatoric uncertainty modelling to address this uncertainty and make the food image recognition model smarter [2].

Figure 2: Food Image Recognition Problem
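One widely used way to combine multi-task losses with learned uncertainty weighting follows Kendall, Gal and Cipolla (2018): each task loss is scaled by a learned precision exp(-s_i), with s_i added as a regulariser. The sketch below shows that formulation as an illustration of the idea; it is not necessarily the exact model presented in the lecture, and the two task names are hypothetical.

```python
# Sketch of an uncertainty-weighted multi-task loss (after Kendall, Gal &
# Cipolla, 2018). Illustrative only; task names below are hypothetical.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # one s_i per task

    def forward(self, task_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s  # precision-weighted loss
        return total

# Hypothetical usage: combine a dish-classification loss and a
# cuisine-classification loss into one trainable objective.
criterion = UncertaintyWeightedLoss(n_tasks=2)
dish_loss = torch.tensor(0.7, requires_grad=True)
cuisine_loss = torch.tensor(1.2, requires_grad=True)
total = criterion([dish_loss, cuisine_loss])
total.backward()
print(total.item(), criterion.log_vars.grad)
```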

Day 1, Lecture 3: The final keynote lecture of day 1 focused on robotics, on the topic Learning Vision-based, Agile Drone Flight: from Frames to Event Cameras, and was delivered by Davide Scaramuzza from the University of Zurich.

He presented several strands of cutting-edge research in the field of robotics, including real-time, onboard computer vision and control for autonomous, agile drone flight [3]. Figure 3 below shows autonomous drone racing from a single flight demonstration.

Figure 3: Autonomous Drone Racing

The presentation also included an update on the group’s current research on the open challenges of computer vision, arguing that the past 60 years of research have been devoted to frame-based cameras, which are arguably not good enough, and proposing event-based cameras as a more efficient and effective alternative, as they do not suffer from the problems faced by frame-based cameras [4].

Day 1, Workshop Labs: During the first workshop we had a practical introduction to the PyTorch deep learning framework and the Google Colab environment, led by Dott. Lorenzo Baraldi from the University of Modena and Reggio Emilia.
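As a flavour of what such an introductory lab typically covers, here is a minimal, self-contained PyTorch training loop on random stand-in data; it is illustrative only, not the actual lab material.

```python
# A minimal PyTorch training loop: small classifier, random stand-in data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(128, 20)            # 128 samples, 20 features
y = torch.randint(0, 3, (128,))     # 3 classes

for epoch in range(10):
    optimizer.zero_grad()           # reset accumulated gradients
    loss = loss_fn(model(x), y)     # forward pass and loss
    loss.backward()                 # backpropagate
    optimizer.step()                # update the weights
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```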

Day 2, Lecture 1: Prof. Di Stefano gave a talk on scene perception and unlabelled data using deep convolutional neural networks. His lecture focused on depth estimation by stereo vision and the performance of computer vision models on the benchmark Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset. He also discussed novel advances in methods for solving computer vision problems such as monocular depth estimation, proposing that it can be tackled via transfer learning [10].

Day 2, Lecture 2: Prof. Cavallaro from Queen Mary University of London delivered a lecture on Robust and Privacy-Preserving Multi-Modal Learning with Body Cameras.

Lab2 – Part I: Sequence understanding and generation, again led by Dott. Lorenzo Baraldi (University of Modena and Reggio Emilia)

Lab2 – Part II: Deep reinforcement learning for control (Dott. Matteo Dunnhofer, University of Udine)

Day 3, Lecture 1: The first keynote lecture of day 3, Self-supervised Learning: Getting More for Less Out of your CNNs, was delivered by Prof. Bagdanov from the University of Florence. In his lecture he discussed self-supervised representation learning and self-supervision for niche problems [6].

Day 3, Lecture 2: The second keynote, delivered by Prof. Samek from the Fraunhofer Heinrich Hertz Institute, covered a hot topic in the field of artificial intelligence: Explainable AI: Methods, Applications and Extensions.

The lecture covered an overview of current AI explanation methods and examples of their real-world applications. We learnt that AI explanation methods can be divided into four categories: perturbation-based methods, function-based methods, surrogate-based methods and structure-based methods. We also learnt that structure-based methods such as Layer-wise Relevance Propagation (LRP) [1] and Deep Taylor Decomposition [7] are to be preferred over function-based methods, as they are computationally fast and do not suffer from the problems of the other types of methods. Figure 4 shows details of the layer-wise decomposition technique.

Figure 4: Layer-wise Relevance Propagation
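To make the idea concrete, here is a minimal NumPy sketch of the LRP epsilon-rule for a tiny fully connected ReLU network: the chosen output’s relevance is redistributed backwards through each layer in proportion to each input’s contribution to the (stabilised) pre-activations. Weights are random and the epsilon handling is simplified for illustration; the full framework in [1] covers convolutions and several propagation rules.

```python
# Minimal NumPy sketch of the LRP epsilon-rule (after Bach et al. [1]).
# Illustrative only: hypothetical random weights, simplified stabiliser.
import numpy as np

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 6)), rng.normal(size=(6, 3))]  # hypothetical weights
bs = [np.zeros(6), np.zeros(3)]

def forward(x):
    """Return the list of layer activations, input included."""
    acts = [x]
    for W, b in zip(Ws, bs):
        x = np.maximum(0.0, x @ W + b)  # ReLU at every layer, for simplicity
        acts.append(x)
    return acts

def lrp(x, target, eps=1e-6):
    """Redistribute the target output's score back onto the input features."""
    acts = forward(x)
    relevance = np.zeros_like(acts[-1])
    relevance[target] = acts[-1][target]       # start from the chosen output
    for layer in reversed(range(len(Ws))):
        a, W, b = acts[layer], Ws[layer], bs[layer]
        z = a @ W + b + eps                    # stabilised pre-activations
        s = relevance / z                      # relevance per unit of z
        relevance = a * (s @ W.T)              # epsilon-rule redistribution
    return relevance                           # one relevance score per input

print(lrp(rng.normal(size=4), target=1))
```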

Overall, it was concluded that the decision functions of machine learning algorithms are often complex and can be difficult to analyse. Nevertheless, leveraging the model’s structure can simplify the explanation problem [9].

Lab3 – Part I: Going Beyond Convolutional Neural Networks for Computer Vision (Dott. Niki Martinel and Dott.ssa Rita Pucci, University of Udine)

Lab3 – Part II: Going Beyond Convolutional Neural Networks for Computer Vision (Dott. Niki Martinel and Dott.ssa Rita Pucci, University of Udine)

Day 4: The final keynote lecture was delivered by Prof. Frontoni on human behaviour analysis. The talk concentrated on the study of human behaviours, specifically deep understanding of shopper behaviours and interactions using computer vision in the retail environment [8]. The presentation showed experiments conducted on multiple store datasets to tackle a range of retail problems, including user interaction classification, person re-identification, weight estimation and human trajectory prediction.

The second part of the morning session on day 4 was open for PhD students to present their research to the other participants on the programme.

Lab4 – Part I: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine)

Lab4 – Part II: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine)

Concluding Remarks

The summer school programme offered us the benefit of interacting directly with world leaders in Artificial Intelligence. The insightful presentations from leading AI experts updated us on the most recent advances in the area, ranging from deep learning to data analytics, right from the comfort of our homes.

The keynote lectures from world leaders provided an in-depth analysis of the state-of-the-art research and covered a large spectrum of current research activities and industrial applications dealing with big data, computer vision, human-computer interaction, robotics, cybersecurity in deep learning and artificial intelligence. Overall, the summer school program was an enlightening and enjoyable learning experience.

References

  1. Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140, 2015.
  2. Marc Bolaños, Marc Valdivia, and Petia Radeva. Where and what am I eating? Image-based food menu recognition. In European Conference on Computer Vision, pages 590–605. Springer, 2018.
  3. Davide Falanga, Kevin Kleber, Stefano Mintchev, Dario Floreano, and Davide Scaramuzza. The foldable drone: A morphing quadrotor that can squeeze and fly. IEEE Robotics and Automation Letters, 4(2):209–216, 2018.
  4. Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Jörg Conradt, Kostas Daniilidis, et al. Event-based vision: A survey. arXiv preprint arXiv:1904.08405, 2019.
  5. Luca Guarnera, Oliver Giudice, and Sebastiano Battiato. Deepfake detection by analyzing convolutional traces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 666–667, 2020.
  6. Xialei Liu, Joost Van De Weijer, and Andrew D. Bagdanov. Exploiting unlabeled data in CNNs by self-supervised learning to rank. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1862–1878, 2019.
  7. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65:211–222, 2017.
  8. Marina Paolanti, Rocco Pietrini, Adriano Mancini, Emanuele Frontoni, and Primo Zingaretti. Deep understanding of shopper behaviours and interactions using RGB-D vision. Machine Vision and Applications, 31(7):1–21, 2020.
  9. Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Toward interpretable machine learning: Transparent deep neural networks and beyond. arXiv preprint arXiv:2003.07631, 2020.
  10. Alessio Tonioni, Matteo Poggi, Stefano Mattoccia, and Luigi Di Stefano. Unsupervised adaptation for deep stereo. In Proceedings of the IEEE International Conference on Computer Vision, pages 1605–1613, 2017.

Call for Participants: Mental workload in daily life

Fourth-year Horizon CDT PhD student Serena Midha is recruiting participants to take part in a research study.

Serena is researching mental workload from a daily life perspective. She and her team are aiming to gather a full five days of subjective workload ratings, as well as data on the activities that generated those ratings. They also want to further their understanding of people’s personal experiences of mental workload.


Participant requirements:

      • Android users
      • Office workers outside of academia
      • No clinical history of anxiety or depression

Participants will be offered £75 for participating in the study.

More information about the study can be found here.

You can contact Serena with any queries.

You can check out Serena’s Research Highlights here: https://highlights.cdt.horizon.ac.uk/students/psxckta

Attitudes and Experiences with Loan Applications – Participants needed

post by Ana Rita Pena (2019 cohort)

My PhD investigates how technologies that protect individuals’ privacy in automated decision-making for loan applications affect people. Within the scope of this broad topic, I am interested in personal experiences of loan applications in regard to trust, fairness and decision making.

I am currently recruiting for my next study, Attitudes and Experiences with Loan Applications: UK Context. The study is made up of a 45-minute interview (held online) and a follow-up online survey.

This study aims to understand how people feel about loan applications and about data sharing in this context, and how well they understand the process behind lending decisions.

We will be focusing on personal loans in particular. Participants will not have to disclose specific information about the loan they applied for (its monetary value, for example) but are invited to reflect on their experiences.

I am looking to recruit people who:
— are over the age of 18
— have applied for a loan in the UK
— are proficient in English
— are able to consent to their participation

Participation in the study will be compensated with a £15 online shopping voucher.

More information about the interview study can be found here.

If you have any further questions or are interested in participating, don’t hesitate to contact me at ana.pena@nottingham.ac.uk

Thank you!

Ana Rita Pena

You can read more about Rita’s research project here.

SoDis

Pepita Barnard is a Research Associate at Horizon Digital Economy Research and has recently submitted her PhD thesis.


post by Pepita Barnard (2014 cohort)

I am excited to be working with Derek McAuley, James Pinchin and Dominic Price from Horizon on a Social Distancing (SoDis) research project. We aim to understand how individuals act when given information indicating concentrations of people, and thus busyness of places.

We are employing a privacy-preserving approach to the project data, which is collected from mobile devices’ WiFi probe signals. With the permission of building managers and relevant Heads of Schools, the SoDis Counting Study will deploy WISEBoxes in a limited number of designated University buildings, gather the relevant data from the Cisco DNA Spaces platform, which the University has implemented across its WiFi network, and undertake a gold-standard human count.

What are WISEBoxes? There’s a link for that here

Essentially, WISEBoxes are a sensor platform developed as part of a previous Horizon project, WISEParks. These sensors count the number of Wi-Fi probe requests seen in a time-period (typically 5 minutes) from unique devices (as determined by MAC address). MAC addresses, which could be considered personally identifiable information, are only stored in memory on the WISEBox for the duration of the count (i.e. 5 minutes). The counts, along with some other metadata (signal intensities, timestamp, the WiFi frequency being monitored) are transmitted to a central server hosted on a University of Nottingham virtual machine. No personally identifiable information is permanently stored or recoverable.
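A minimal sketch of that counting logic might look like the following: identifiers are held in memory only for the current window, and only the aggregate count leaves the device. All names here are hypothetical; this is not the actual WISEBox code.

```python
# Sketch of the counting logic described above: device identifiers live in
# memory only for the current window, and only the aggregate count is kept.

WINDOW_SECONDS = 5 * 60  # 5-minute counting window

def count_probes(probe_stream):
    """probe_stream yields (timestamp, mac_address) tuples from the sniffer."""
    window_start, seen = None, set()
    for ts, mac in probe_stream:
        if window_start is None:
            window_start = ts
        if ts - window_start >= WINDOW_SECONDS:
            # Emit the aggregate, then discard every identifier.
            yield {"window_start": window_start, "unique_devices": len(seen)}
            seen.clear()             # MAC addresses are dropped here
            window_start = ts
        seen.add(mac)                # held in memory only, never written out
    # (The final partial window is dropped for brevity.)

# Synthetic probes: two devices in the first window, one in the second.
for record in count_probes([(0, "aa"), (10, "bb"), (20, "aa"), (310, "cc")]):
    print(record)
```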

We will have ‘safe access’ to the Cisco DNA Spaces API, meaning MAC addresses and other identifiers will not be provided to the SoDis research team. The data we gather from the Cisco DNA Spaces API will be processed to produce information similar to that gathered by the WISEBoxes, i.e. counts of unique users connected to an access point in a given period of time.

To develop our ‘busyness’ models, we will also deploy human researchers to count people in the designated buildings and spaces. This human-counting element will provide a gold standard for those buildings at the time of counting. The gold standard can then be modelled against data simultaneously produced by the WiFi signal counting methods, producing an estimated level of busyness.

With the help of several research assistants, we will collect 40 hours of human-counting data, illustrating building activity over a typical workweek. We expect to start this human-counting work in the School of Computer Science Building mid-January 2021.

This gold standard human-count will include both a door count and an internal building count. For each designated building, we will have researchers posted at the entrances and exits to undertake door counts. The door counters will tally numbers of people going in and numbers going out within 5-minute intervals using + and – signs. On each floor, researchers will count people occupying rooms and other spaces in the building (e.g., offices, labs, atrium, corridors). Each space will be labelled by room number or name on a tally sheet. Researchers will do two rounds of their assigned floor per hour, checking numbers of people occupying the various spaces. Different buildings will require different arrangements of researchers to enable an accurate count. For example, to cover a school building like Computer Science on Jubilee, we will have 6 researchers counting at any one time.

We expect some of the data collected from the WiFi probes and connections to be spurious (noise); however, this is not a concern. Why? Because to represent busyness, we do not need to worry about exact numbers.

It is accepted that the data may not be accurate: for example, a device may send a WiFi probe signal to an access point (AP) or WISEBox in a designated building even though its owner is not actually in the building. This potential for inaccuracy is a recognised feature of the privacy-preserving approach we are taking to model busyness for the social distancing tool, SoDis. The researchers undertaking the human-counting study may miss the occasional person roaming the building, but this level of error is not of particular concern. When the human count is triangulated with the sources of WiFi data, a model of busyness for each space can be produced.

The approach we are testing is relevant not only to our current desire to reduce infection from COVID-19 but may also prove useful to support other health and social causes.

Listening to Digital Dust

Laurence Cliffe (2017 cohort) writes about how the design of analogue music equipment influenced the online interactive experiments in the Science and Media Museum’s Sonic Futures project.

The practice of attempting to replicate historical analogue music equipment within the digital domain remains a popular and enduring trend. Notable examples include electric guitar tone studios and digital amplifier simulators, virtual synthesisers, and effects plugins for digital audio workstations such as Logic, GarageBand or Cubase.

As expected, such examples attempt to replicate the much-loved nuances of their analogue counterparts, whether that be the warmth of vintage valve amplification and magnetic tape saturation, or the unpredictable, imperfect and organic characteristics of hand-assembled and aged electronic circuitry.

Within the Sonic Futures online interactive exhibits we can hear the sonic artefacts of these hardware related characteristics presented to us within the digital domain of the web; the sudden crackle and hiss of a sound postcard beginning to play and the array of fantastic sounds that can be achieved with Nina Richards’ Echo Machine. After all, who would be without that multi-frequency, whooping gurgle sound you can create by rapidly adjusting the echo time during playback?

Within digital music technology, while this appetite for sonic nostalgia is interesting in itself, we can also see how this desire to digitally replicate an ‘authentic experience’ extends to the way in which these devices are visually represented, and how the musician, music producer or listener is directed to interact with them. Again, we see this in the Sonic Futures online interactive exhibits: the Sound Postcard Player with a visual design reminiscent of a 1950s portable record player; the Echo Machine’s visual appearance drawing upon the design of Roland’s seminal tape-based echo machine of the 1970s, the RE-201 or Space Echo; and Photophonic with its use of retro sci-fi inspired fonts and illustrations.

Roland RE-201 ‘Space Echo’ audio effects unit
Screenshot of online interactive echo machine designed by Nina Richards

We can see even more acute examples of this within some of the other examples given earlier, such as virtual synthesisers and virtual guitar amplifiers, where features such as backlit VU meters (a staple of vintage recording studio equipment) along with patches of rust, paint chips, glowing valves, rotatable knobs, flickable switches and pushable buttons are often included and presented to us in a historically convincing way as an interface through which these devices are to be used.

This type of user-interface design is often referred to as skeuomorphism, and it is prevalent within lots of digital environments; the trash icon on your computer’s desktop is a good example (and is often accompanied by the sound of a piece of crunched-up paper hitting the side of a metallic bin). Skeuomorphism as a design style tends to go in and out of fashion. You may perhaps notice the look and feel of your smartphone’s calculator changing through the course of various software updates, from one that is to a lesser or greater degree skeuomorphic to one that is more minimalist and graphical, often referred to as being of a flat design.

Of course, it is only fitting that the Sonic Futures virtual online exhibits seek to sonically and visually reflect the historical music technologies and periods with which they are so closely associated. At a point in time when we are all seeking to create authentic or realistic experiences within the digital domain, whether it be a virtual work meeting or a virtual social meetup with friends and relatives, using the visual and sonic cues of our physical realities within the digital domain reassures us and gives our experience a sense of authenticity.

Along with the perceived sonic authority of original hardware, another notable reason why skeuomorphic design has been so persistent within digital music technology is the interface-heavy environment of the more traditional hardware-based music studio (think of the classic image of the music producer sitting behind a large mixing console with a vast array of faders, buttons, switches, dials and level meters). When moving from this physical studio environment to a digital one, it made sense, in order to facilitate learning, to make the new digital environment a familiar one.

Sound postcards player

Another possible contributing factor is the relative ease with which the digital versions can be put to use within modern music recording and producing environments: costing a fraction of the hardware versions and taking up no physical space, they can be pressed into action within bedroom studios across the globe. Perhaps this increased level of accessibility generates a self-perpetuating reverence for the original piece of hardware, which is inevitably expensive and hard to obtain, and therefore its visual representation within a digital environment serves as a desirable feature, an authenticating nod to an analogue ancestor.

There are, of course, exceptions to the rule. The digital audio workstation Ableton Live (along with some other DAWs and plugins) almost fully embraces a flat design aesthetic. This perhaps begs the question: what role, if any, does the realistic visual rendering of a piece of audio hardware play in its digital counterpart? What does it offer beyond the quality of the audio reproduction? From the perspective of a digital native (someone who has grown up in the digital age) its function as a way to communicate authenticity is thrown further into question and perhaps it is skeuomorphic design’s potential to communicate the history behind the technology that comes into focus.

Visit Echo, Sound Postcards and Photophonic to try the online experiments for yourself. You can also read more about the Sonic Futures project.

–originally posted on National Science and Media Museum Blog

Helping you have smarter train journeys

PhD researcher Christian Tamakloe (2016 cohort) is currently recruiting participants for a study to help understand which preparation activities and behaviours result in better rail journeys.


As part of research into the use of personal data to improve the rail passenger experience, I am currently inviting individuals travelling by train this month (December) to trial a proposed travel companion app aimed at helping rail travellers prepare for how they spend their time during journeys.

The app includes features such as travel information and reminders, as well as records of previous trip experiences.

Participants will be required to use the app for their upcoming trip, after which they will complete a short questionnaire to share their thoughts about the app.

The study is open to anyone above the age of 18 with some experience of rail travel in the UK. In addition, you will need to be travelling before 20 December 2020.

More information can be found here.

Feel free to contact Christian with any queries: christian.tamakloe@nottingham.ac.uk

Call for Participants: Detecting fake aerial images

PhD researcher Matthew Yates (2018 cohort) is currently recruiting participants for a short online study on detecting fake aerial images created using Generative Adversarial Networks (GANs).


Hello. I am a third-year Horizon CDT PhD student partnered with Dstl. My PhD project is about the detection of deep-learning-generated aerial images, with the final goal of improving current detection models.

I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (Satellite aerial images) or GAN-generated images.

Purpose: To assess the difficulty of distinguishing GAN-generated fake images from real satellite photos of rural and urban environments. This is part of a larger PhD project looking at the generation and detection of fake Earth observation data.

Who can participate? The study is open to anyone who would like to take part, although the involvement of people with experience of related image data (e.g. satellite images, GAN-generated images) is of particular interest.

Commitment: The study should take between 5 and 15 minutes to complete and is hosted online on pavlovia.org.

How to participate? Read through this Information sheet and follow the link to the study at the end.

Feel free to contact me with any queries: Matthew.Yates1@nottingham.ac.uk

A Reflection on The Connected Everything and Smart Products Beacon Summer School 2020

post by Cecily Pepper (2019 cohort)

My first summer school started with an invitation via email. Despite my interest in the topic, my first thought was that robotics was not my area of expertise (coming from a social science background), so maybe I shouldn’t bother applying as I’d be out of my depth. After some consideration, though, I thought it would create some great opportunities to meet new people from diverse backgrounds. So, I stopped worrying about my lack of knowledge in the area and just went for it; and I got a place!

The summer school was held digitally due to COVID-19 restrictions, which had both its benefits and pitfalls. On the first day, we were welcomed by Debra Fearnshaw and Professor Steve Benford, and were then given the opportunity to introduce ourselves. From this it was apparent that there was a wide variety of delegates from several universities, with a range of disciplines including social sciences, robotics, engineering and manufacturing. The first day mostly consisted of talks from experts about the challenges we face in connecting technology and the potential of co-robotics within the fields of agrirobotics, the home and healthcare.

The main task of the summer school was to create a cobot (collaborative robot) that could overcome some of the issues that COVID-19 has created or exacerbated. The issue each group chose to address had to fall into one of the categories introduced on the first day: food production (agrirobotics), healthcare or the home. Along with this challenge, more detail was needed on function, technological components, and four key areas of the cobot design: ethics, communication, learning and safety. These aspects were introduced on the second day. After being split into groups at the end of the first day, I felt happy, as my group had a range of experience and expertise between us, which I felt would bode well for the challenge as well as being beneficial for me, since I could learn something from everyone.

Similarly, the second day consisted mostly of talks, this time based on the four themes mentioned previously. The ethics discussion was interesting and included in-depth explanations around aspects to consider when reflecting upon the ethical consequences of our designs, such as privacy, law, security and personal ethics. An online activity followed the ethics talk but was soon interrupted by a technical glitch. Despite this, we were able to engage with alternative resources provided in order to reflect upon the ethics of our cobot design. This was useful both for our eventual design, as well as applying this to our own PhD research.

The other themes then followed, including a discussion around interaction and communication in technology. This was an insightful introduction to voice user interfaces and the like, and to what current research in this field is focusing on. While fascinating in its own right, it was also useful for thinking about how to apply this to our cobot design, and which features might be useful or necessary for our cobot’s functionality. A talk on the third theme, learning, was then delivered, including details about facial recognition and machine learning and their applications in the field of robotics. Likewise, this was useful in reflecting upon how these features might be applicable to our design. Finally, the theme of safety was considered. This talk gave us the knowledge and ability to consider the safety aspects of our cobot, which was particularly apt when considering COVID safety implications too. Overall, the first two days were quite lengthy in terms of screen time (despite some breaks), and I found myself wilting slightly towards the end. However, I think we could all understand and sympathise with the difficulty of minimising screen time when there is a short space of time to complete all of the summer school activities.

On the final day, we split into our teams to create our cobot. This day was my favourite part of the summer school, as it was fantastic to work with such a variety of people who all brought different skills to the group. Together, we developed a cobot design and worked through the themes from the previous day, ensuring we met the design brief and covered all bases. Probably the biggest challenge was keeping it simple, as we had so many ideas between us. Despite our abundance of ideas, we were strict with ourselves as a group and kept the design focused and simple. The five-minute presentation slot also meant that we had to keep our design simple yet effective. We then presented our home assistant cobot, Squishy. Squishy was an inflatable, soft cobot designed to assist carers in lifting patients who were bed-bound (as occupational injuries are a significant problem within the care industry). Squishy’s soft design enabled comfort for the patient being lifted, while the modular design provided a cost-effective solution and the possibility of added extras if necessary. Along with this, Squishy had wipe-clean surfaces to enable effective cleaning in light of COVID-19, as well as aiding social distancing by reducing the need for carer-patient contact. Other features of Squishy included machine-learned skeletal tracking and thermal cameras to aid safe functionality, and minimal personal data collection to maintain ethical standards. After the presentations and the questions that followed, the judges deliberated. The results were in… my team were the winners! While I was happy to have won with my team, the most fruitful part of the experience for me was meeting and learning from others who had different backgrounds, perceptions and ideas.

Overall, I felt the summer school was well-organised and a fantastic opportunity to work with new people from diverse backgrounds, and I was very glad to be a part of it. I’m also pleased I overcame the ‘Imposter Syndrome’ feeling of not believing I would know enough or have enough experience to be a valuable delegate in the summer school. So, my advice to all students would be: don’t underestimate what you can contribute, don’t overthink it, and just go for it; you might end up winning!

The Summer School was funded by EPSRC through the Connected Everything II network plus (EP/S036113/1).


COBOT Collaboration for Connected Everything Summer School

post by Angela Thorton (2019 cohort)

Baymax (Hall, D., Williams, C., 2014)

Say hello to Squishy, initially inspired by Baymax (Hall, D., Williams, C., 2014). This COBOT concept was co-created during an intensive online Summer School in July 2020, run jointly by Connected Everything and the Smart Products Beacon at the University of Nottingham.

The online event, running over two and a half days, involved 28 delegates from various UK universities and culminated in a brief to design a COVID-ready COBOT (collaborative robot) to work in either Food Production, Healthcare or the Home. Squishy was the collaborative brainchild of myself and the other five members of my group, the BOTtom Wipers… The group comprised me and Cecily from the 2019 cohort at Horizon CDT, plus Laurence, Hector, Siya and Robin from Lincoln, Strathclyde, and Edinburgh/Heriot-Watt universities.

The day and a half leading up to the design brief set the context through a series of related talks on the challenges of working in the different sectors, as well as discussions on core aspects such as Ethics, Interaction and Comms, Learning and Safety. Hence, by Friday morning we were ready for our design challenge: to design a COBOT relevant to the COVID world we currently live in and present the concept in five slides lasting five minutes – and to achieve this by mid-afternoon the same day!

Our group quickly worked out how to make the most of our individual and different backgrounds, ranging from robotics and machine learning to neuroscience and psychology. The challenge we decided on was situated in the home: lifting bed-bound individuals, a task that places considerable physical strain on carers and requires close contact; obviously less than ideal in a COVID world.

Our solution was Squishy: a cost-effective assistive COBOT inspired by the fictional superhero Baymax (Hall, D., Williams, C. 2014) and the caterpillar robot made using a 3D printer that could output soft, rubbery material and hard material simultaneously (Umedachi, T., Shimizu, M., & Kawahara, Y., 2019).

We decided on a soft, modular COBOT since we felt this would be more comforting and comfortable for the individuals being lifted. Manufacturing costs can limit access to assistive robots, so Squishy was inflated using pressurised air, with different air pockets allowing his shape to be modified to suit individuals of different body sizes and shapes. To ensure stability and safety as well as hygiene, we chose a two-body system comprising flexible 3D-printed silicone moulds overlaid with wipe-clean textile. Being able to keep Squishy clean was critical given COVID.

Our next challenge was to ensure that Squishy could lift and put down an individual safely. We decided to use input from thermal cameras and real-time skeleton tracking using OpenPose, since this is a relatively straightforward and cost-effective system. We planned to teach Squishy to hold and lift safely via incremental learning of holding and lifting varied body shapes and weights, either from datasets or by imitation. The use of thermal cameras and skeleton tracking also allowed us to offer two additional modules if required. The first was temperature screening (37.8 degrees Celsius or greater potentially indicating COVID infection); the second was for Squishy to gently rock the individual to comfort them if required. A rocking motion has been shown to promote sleep in infants and, more recently, also in adults (Perrault et al., 2019).

For ease of use and safety, we deliberately kept the input and output communications simple: a wearable control bracelet or necklace with buttons for basic functions (e.g. lift up/down) as well as an emergency stop button, which would signal that assistance was required.

Ethical issues were key, both in terms of the collection and storage of personal data and in the psychological aspects of Squishy interacting with humans. We decided to collect only the minimum personal data required for a safe and comfortable interaction, such as height, weight and BMI (which could be combined with skeleton tracking data), with the individual requiring assistance identified only by a unique identifier. Data would be stored in a secure storage system such as Databox, an EPSRC project involving collaborators from Queen Mary University of London, the University of Cambridge and the University of Nottingham, which provides a platform for managing secure access to data. All our data processes would be GDPR compliant.

The individual’s response to and relationship with Squishy was also central to the design both in terms of the COBOT’s appearance, feel and touch and the use of slow, comfortable movements which engender relaxation and trust.

Having discussed and honed our design ideas, we then had to consolidate them into five slides and a five-minute presentation! We were each involved in different aspects of the brief, following which we collectively refined the slides until we had the final version. Getting across the key elements in five minutes proved to be a challenge, with our first run-through coming in at closer to seven and a half minutes, but on the day we just managed to finish on time. It was interesting to see how many people really struggled with the time challenge, and I am sure my experience at summer school will be useful when I enter the Three Minute Thesis (3MT®) in 2021…

And the outcome of all this hard work and collaboration? I am delighted to report that The BOTtom Wipers and Squishy won the COBOT challenge.

References:

Hall, D., Williams, C., 2014. Big Hero 6 (Film). Walt Disney Animation Studios.

Perrault, A. A., Khani, A., Quairiaux, C., Kompotis, K., Franken, P., Muhlethaler, M., Schwartz, S., & Bayer, L. (2019). Whole-Night Continuous Rocking Entrains Spontaneous Neural Oscillations with Benefits for Sleep and Memory. Current Biology, 29(3), 402-411.e403. https://doi.org/10.1016/j.cub.2018.12.028

Umedachi, T., Shimizu, M., & Kawahara, Y. (2019). Caterpillar-Inspired Crawling Robot Using Both Compression and Bending Deformations. IEEE Robotics and Automation Letters, 4(2), 670-676. https://doi.org/10.1109/LRA.2019.2893438

Acknowledgements:

I’d like to thank Connected Everything and the Smart Products Beacon at the University of Nottingham who organised and ran the Summer School so efficiently, my lead supervisor Alexandra Lang who read my draft copy and is always helpful and inspirational and the Horizon Centre for Doctoral Training at the University of Nottingham (UKRI Grant No. EP/S023305/1) for supporting my PhD.