Horizon Centre for Doctoral Training Blog
To share our activity, progress and successes
Fourth-year PhD student Joe Strickland (2017 cohort) presents eight years’ worth of outreach activity.
You can hear all about it here.
Article written by Farid Vayani (2020 cohort)
Originally published in the ISACA Journal
Within the last decade or so, cyberincidents have made headlines and have become top strategic risk factors for enterprises. These incidents have not spared even high-profile enterprises and government bodies. Despite significant investments in cyberdefense, these entities are still considered soft targets by attackers. It has become clear that the weakest link in the security chain is the human factor.
Negligence is a key aspect of human fallibility. Employees and contractors fail to heed security training, enterprise policies, and applicable laws and regulations, which may be regarded as mere check-the-box exercises at the time of joining the enterprise. Negligent insiders are responsible for 62 percent of cyberincidents.1
Consequently, cybersecurity management can no longer be treated as something distinct from the business or as merely an IT department issue. Senior leadership must enhance the enterprise’s cybersecurity strategy by ensuring a security risk-aware culture and working with employees, contractors, regulators, peer organizations and third-party suppliers to reduce the risk of cyberincidents. Ownership of cybersecurity risk at the top helps secure the trust and confidence of all stakeholders while setting the appropriate tone.2
The growing number of insider threats, the expanding regulatory requirements to safeguard personal and sensitive data, the complexity of responding to changing attack vectors, and the pressure created by these circumstances demand a shift in cybersecurity management from intraorganizational to cross-organizational. In cross-organizational cybersecurity management, the sharing of threat intelligence is of paramount importance.3 This includes information that can mitigate insider threats, such as background checks to determine credit rating, employment history and criminal convictions. Intraorganizational cybersecurity management, in contrast, caters to a noncollaborative and independent type of security management,4 which leads to a siloed approach and enables insider threats to materialize and expand effortlessly.
Theory Y, a theory of human work motivation and management, proposes an environment in which leading by example extends respect, dignity and inspiration to employees, encouraging them to become ethical and disciplined in accepting and conforming to the enterprise’s security culture.5 In contrast, Theory X takes a cynical view of human nature and leads to an adversarial relationship between leaders and employees.6 Social learning theory suggests that weak leadership is to blame for an apathetic and uncooperative workforce; thus, top management should be held accountable for the security culture, ensuring its acceptance by articulating its core ethical values and principles through verbal expression and regular reminders.7
Consider the example of a security audit conducted in a Theory X enterprise versus a Theory Y enterprise. In a Theory X enterprise, there is a bureaucratic chain of command. The auditor discovers a problem and reports it to the information security officer. The security officer passes the information on to the department head, who, in turn, informs the team leader of the non-compliance issue. The team leader summons the employee or employees closest to the source of the problem. This creates a confrontational environment because the employees may have been unaware that their activities were being audited.
In a Theory Y enterprise, the auditor collaborates with the relevant employees when setting the objectives of the audit and engages them directly when a problem is discovered, thus enabling them to own and address the problem. The auditor’s report still climbs the official ladder, but by the time it arrives at the top, the employees have already taken the appropriate steps to mitigate the issue. Employees appreciate feedback from the top and recognize that the enterprise is not interested in punishing them. Such an up-front approach creates mutual trust, respect and an improved security culture.
Most enterprise leaders are not experts in cybersecurity management, but such expertise is not required to make effective decisions. Leaders should take the following steps:
Nevertheless, ownership of cybersecurity risk at the top is key to getting the security culture right and fostering the desired security behaviors.
“An enterprise’s security culture dictates the behavior of its employees and the enterprise’s success in sustaining an adequate security posture.”
The views expressed in this article are the author’s views and do not represent those of the organization or the professional bodies with which he is associated.
post by Edwina Abam (2019 cohort)
The summer school programme I enrolled on this summer was the third edition of the International Summer School on Artificial Intelligence, themed Artificial Intelligence from Deep Learning to Data Analytics (AI-DLDA 2020).
The programme was organised by the University of Udine, Italy, in partnership with Digital Innovation Hub Udine, the Italian Association of Computer Vision, Pattern Recognition and Machine Learning (CVPL), the Artificial Intelligence and Intelligent Systems National Lab, AREA Science Park and the District of Digital Technologies ICT regional cluster (DITEDI).
The AI-DLDA summer school is usually held in Udine, Italy; however, following the development of the COVID-19 situation, this year’s edition was held entirely online via an educational platform. It ran for five days, from Monday 29 June until Friday 3 July 2020. Around 32 PhD students from more than eight countries took part, along with master’s students, researchers from across the world and several practitioners from Italian industry and ministries.
The school was organised into four intensive teaching days, with keynote lectures in the morning sessions and practical workshops in the afternoons. The keynote lectures were delivered by eight international speakers from top universities and high-profile organisations. Both the lectures and the lab workshop sessions took place in a dedicated online classroom.
Day 1, Lecture 1: The first keynote lecture was on the theme Cyber Security and Deep Fake Technology: Current Trends and Perspectives.
Deep fake technology produces multimedia content that is created or synthetically altered using machine learning generative models; such synthetically derived content is popularly termed ‘deep fakes’. It was stated that, with the current rise of deep fakes and synthetic media, the historical belief that images, video and audio are reliable records of reality is no longer tenable. Figure 1 below shows an example of the deep fake phenomenon.
Research on deep fake technology shows that the phenomenon is growing rapidly online, with the number of fake videos doubling over the past year. The increase is reportedly driven by the growing ubiquity of tools and services that have lowered the barrier to entry and enabled novices to create deep fakes. The machine learning models used to create or modify such content are Generative Adversarial Networks (GANs); variants of the technique include StarGAN and StyleGAN.
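Purely as an illustration of the adversarial training idea behind GANs (and not of the StarGAN or StyleGAN architectures discussed in the lecture), a minimal PyTorch sketch might look like the following; the network sizes, data and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over 64-dimensional "images" (illustration only).
G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    """One adversarial update; `real` is a (batch, 64) tensor of genuine samples."""
    batch = real.size(0)
    z = torch.randn(batch, 16)                      # latent noise

    # 1) Discriminator update: label real samples as 1 and generated samples as 0.
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Generator update: try to make the discriminator label fakes as real.
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example call with random stand-in data:
print(train_step(torch.randn(32, 64)))
```

In an image-synthesis setting the two small multilayer perceptrons would be replaced by convolutional networks, but the alternating optimisation is the same.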
The speakers presented their own work, which focuses on detecting deep fakes by analysing convolutional traces [5]. They concentrated on images of human faces, trying to detect convolutional traces hidden in those images: a sort of fingerprint left by the image generation process. They proposed a new deep fake detection technique based on the Expectation-Maximization algorithm. Their method outperformed existing approaches and proved effective in detecting fake images of human faces generated by recent GAN architectures.
This lecture was really insightful for me because I got the opportunity to learn about Generative Adversarial Networks and to understand their architectures and real-world applications directly from leading researchers.
Day 1, Lecture 2: Petia Radeva from the University of Barcelona gave a lecture on food recognition. The presentation discussed uncertainty modelling for food analysis within an end-to-end framework. Food recognition was treated as a Multi-Task Learning (MTL) problem, since automatically identifying foods from different cuisines across the world is challenging because of uncertainty. The MTL problem is shown in figure 2 below. The presentation introduced aleatoric uncertainty modelling to address this uncertainty and make the food image recognition model smarter [2].
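For readers unfamiliar with aleatoric (data-dependent) uncertainty, the sketch below shows one common formulation for classification, following Kendall and Gal’s heteroscedastic loss, in which the model predicts a variance for each logit and the loss is computed from noise-corrupted samples. This is only an illustrative sketch under those assumptions, not necessarily the exact formulation used in the food recognition work.

```python
import math
import torch

def aleatoric_cross_entropy(logits, log_var, targets, n_samples=20):
    """Monte Carlo estimate of the classification loss under learned logit noise.

    logits:  (B, C) predicted class scores
    log_var: (B, C) predicted log-variance of each logit (the aleatoric part)
    targets: (B,)   integer class labels
    """
    std = torch.exp(0.5 * log_var)
    noise = torch.randn(n_samples, *logits.shape, device=logits.device)
    noisy_logits = logits.unsqueeze(0) + std.unsqueeze(0) * noise        # (T, B, C)
    log_probs = noisy_logits.log_softmax(dim=-1)
    idx = targets.view(1, -1, 1).expand(n_samples, -1, 1)
    target_log_probs = log_probs.gather(-1, idx).squeeze(-1)             # (T, B)
    # Negative log of the sample-averaged probability of the true class.
    nll = -(torch.logsumexp(target_log_probs, dim=0) - math.log(n_samples))
    return nll.mean()

# Toy usage: a model would output both a `logits` head and a `log_var` head per image.
loss = aleatoric_cross_entropy(torch.randn(8, 10), torch.zeros(8, 10),
                               torch.randint(0, 10, (8,)))
print(loss)
```

Inputs the model finds ambiguous (for example, visually similar dishes) can then be given a large predicted variance, which softens their contribution to the loss.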
Day 1, Lecture 3: The final keynote lecture of day 1 focused on robotics, with the topic Learning Vision-based, Agile Drone Flight: from Frames to Event Cameras, delivered by Davide Scaramuzza from the University of Zurich.
He presented several strands of cutting-edge research in the field of robotics, including real-time, onboard computer vision and control for autonomous, agile drone flight [3]. Figure 3 below shows autonomous drone racing from a single flight demonstration.
The presentation also included an update on their current research into the open challenges of computer vision, arguing that the past 60 years of research have been devoted to frame-based cameras, which are arguably not good enough, and proposing event-based cameras as a more efficient and effective alternative since they do not suffer from the problems faced by frame-based cameras [4].
Day 1, Workshop Labs: During the first workshop we had a practical introduction to the PyTorch deep learning framework and the Google Colab environment, led by Dott. Lorenzo Baraldi from the University of Modena and Reggio Emilia.
Day 2, Lecture 1: Prof. Di Stefano gave a talk on scene perception and unlabelled data using deep convolutional neural networks. His lecture focused on depth estimation by stereo vision and the performance of computer vision models on the benchmark Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset. He also discussed novel advances in methods for solving computer vision problems such as monocular depth estimation, proposing that it can be tackled via transfer learning [10].
Day 2, Lecture 2: In addition, Prof. Cavallaro from Queen Mary University of London delivered a lecture on Robust and privacy-preserving multi-modal learning with body cameras.
Lab2 – Part I: Sequence understanding and generation, again led by Dott. Lorenzo Baraldi (University of Modena and Reggio Emilia)
Lab2 – Part II: Deep reinforcement learning for control (Dott. Matteo Dunnhofer, University of Udine)
Day 3, Lecture 1: The keynote lecture focused on self-supervision, titled Self-supervised Learning: Getting More for Less Out of your CNNs, by Prof. Bagdanov from the University of Florence. In his lecture he discussed self-supervised representation learning and self-supervision for niche problems [6].
Day 3, Lecture 2 was given by keynote speaker Prof. Samek from the Fraunhofer Heinrich Hertz Institute on a hot topic in the field of artificial intelligence: Explainable AI: Methods, Applications and Extensions.
The lecture gave an overview of current AI explanation methods and examples of their real-world applications. We learnt that AI explanation methods can be divided into four categories: perturbation-based methods, function-based methods, surrogate-based methods and structure-based methods. Structure-based methods such as Layer-wise Relevance Propagation (LRP) [1] and Deep Taylor Decomposition [7] are to be preferred over function-based methods, as they are computationally fast and do not suffer from the problems of the other categories. Figure 4 shows details of the layer-wise decomposition technique.
Overall, it was concluded that the decision functions of machine learning algorithms are often complex and difficult to analyse; nevertheless, leveraging the model’s structure can simplify the explanation problem [9].
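To make the structure-based idea a little more concrete, here is a minimal NumPy sketch of the LRP-epsilon rule for a single dense layer; real LRP implementations apply rules like this layer by layer through the whole network, including convolutions and pooling.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out to the inputs of one dense layer.

    a:     (d_in,)        activations entering the layer
    W:     (d_out, d_in)  layer weights
    b:     (d_out,)       layer biases
    R_out: (d_out,)       relevance assigned to the layer's outputs
    """
    z = W @ a + b                        # pre-activations z_j = sum_i w_ji * a_i + b_j
    s = R_out / (z + eps * np.sign(z))   # epsilon term stabilises small denominators
    return a * (W.T @ s)                 # R_i = a_i * sum_j w_ji * s_j

# Toy usage: relevance from a 3-unit output is propagated back to 4 inputs.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.normal(size=(3, 4))
R_in = lrp_epsilon_dense(a, W, np.zeros(3), R_out=np.array([0.2, 0.5, 0.3]))
print(R_in, R_in.sum())   # with zero bias, relevance is approximately conserved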
Lab3 – Part I: Going Beyond Convolutional Neural Networks for Computer Vision (Dott. Niki Martinel and Dott.ssa Rita Pucci, University of Udine)
Lab3 – Part II: Going Beyond Convolutional Neural Networks for Computer Vision (Dott. Niki Martinel and Dott.ssa Rita Pucci, University of Udine)
Day 4: The final keynote lecture was delivered by Prof. Frontoni on human behaviour analysis. The talk concentrated on the study of human behaviours, specifically deep understanding of shopper behaviours and interactions using computer vision in the retail environment [8]. The presentation showed experiments conducted on multiple store datasets to tackle different retail problems, including user interaction classification, person re-identification, weight estimation and human trajectory prediction.
The second part of the morning session on Day 4 was open for PhD students to present their research to the other participants on the programme.
Lab4 – Part I: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine)
Lab4 – Part II: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine)
The summer school programme offered us the benefit of interacting directly with world leaders in Artificial Intelligence. The insightful presentations from leading AI experts updated us on the most recent advances in the field, ranging from deep learning to data analytics, right from the comfort of our homes.
The keynote lectures from world leaders provided an in-depth analysis of the state-of-the-art research and covered a large spectrum of current research activities and industrial applications dealing with big data, computer vision, human-computer interaction, robotics, cybersecurity in deep learning and artificial intelligence. Overall, the summer school program was an enlightening and enjoyable learning experience.
Fourth-year Horizon CDT PhD student Serena Midha is recruiting participants to take part in a research study.
Serena is researching mental workload from a daily life perspective. She and her team aim to gather a full five days of subjective workload ratings, as well as data on the activities that were being done to generate those ratings. They also want to further their understanding of people’s personal experiences of mental workload.
Participant requirements:
Participants will be offered £75 for participating in the study.
More information about the study can be found here.
You can contact Serena with any queries.
You can check out Serena’s Research Highlights here: https://highlights.cdt.horizon.ac.uk/students/psxckta
post by Ana Rita Pena (2019 cohort)
My PhD investigates how technologies that protect individuals’ privacy in automated decision-making for loan applications affect people. Within this broad topic, I am interested in personal experiences of loan applications with regard to trust, fairness and decision making.
I am currently recruiting for my next study, Attitudes and Experiences with Loan Applications: UK Context. The study consists of a 45-minute interview (held online) and a follow-up online survey.
This study aims to understand how people feel about loan applications and about data sharing in this context, and how well they understand the decision process behind lending decisions.
We will be focusing on personal loans in particular. Participants will not have to disclose specific information about the loan they applied for (its monetary value, for example) but are invited to reflect on their experiences.
I am looking to recruit people who:
— are over the age of 18
— have applied for a loan in the UK
— are proficient in English
— are able to provide consent to their participation
Participation in the study will be compensated with a £15 online shopping voucher.
More information about the interview study can be found here.
If you have any further questions or are interested in participating, don’t hesitate to contact me at ana.pena@nottingham.ac.uk
Thank you!
Ana Rita Pena
You can read more about Rita’s research project here.
post by Pepita Barnard (2014 cohort)
I am excited to be working with Derek McAuley, James Pinchin and Dominic Price from Horizon on a Social Distancing (SoDis) research project. We aim to understand how individuals act when given information indicating concentrations of people, and thus busyness of places.
We are employing a privacy-preserving approach to the project data, which is collected from mobile devices’ WiFi probe signals. With the permission of building managers and the relevant Heads of Schools, the SoDis Counting Study will deploy WISEBoxes in a limited number of designated University buildings, gather the relevant data from the Cisco DNA Spaces platform that the University has implemented across its WiFi network, and undertake a gold-standard human count.
What are WISEBoxes? There’s a link for that here
Essentially, WISEBoxes are a sensor platform developed as part of a previous Horizon project, WISEParks. These sensors count the number of WiFi probe requests seen in a time period (typically 5 minutes) from unique devices (as determined by MAC address). MAC addresses, which could be considered personally identifiable information, are stored in memory on the WISEBox only for the duration of the count (i.e. 5 minutes). The counts, along with some other metadata (signal intensities, timestamp, the WiFi frequency being monitored), are transmitted to a central server hosted on a University of Nottingham virtual machine. No personally identifiable information is permanently stored or recoverable.
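As a rough illustration of the counting logic described above (not the actual WISEBox firmware), the Python sketch below keeps device identifiers in memory only for the current five-minute window and then reports just the count and metadata; the hashing step and the class and field names are my own additions for the example.

```python
import hashlib
import time

WINDOW_SECONDS = 5 * 60   # the 5-minute counting interval described above

class ProbeCounter:
    """Count unique probing devices per window without retaining identifiers."""

    def __init__(self, frequency_mhz):
        self.frequency_mhz = frequency_mhz
        self.window_start = time.time()
        self.seen = set()
        self.rssi_values = []

    def on_probe_request(self, mac_address, rssi):
        # Hash the MAC so the raw identifier is never kept; even the hash only
        # lives in memory until the window is reported and cleared.
        self.seen.add(hashlib.sha256(mac_address.encode()).hexdigest())
        self.rssi_values.append(rssi)

    def maybe_report(self):
        now = time.time()
        if now - self.window_start < WINDOW_SECONDS:
            return None
        report = {
            "timestamp": int(self.window_start),
            "frequency_mhz": self.frequency_mhz,
            "unique_devices": len(self.seen),
            "rssi_values": self.rssi_values,     # signal intensities (metadata only)
        }
        self.seen.clear()                        # identifiers are discarded with the window
        self.rssi_values = []
        self.window_start = now
        return report                            # e.g. POSTed to the central server
```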
We will have ‘safe access’ to Cisco DNA Spaces API, meaning MAC addresses and other identifiers will not be provided to the SoDis research team. The data we gather from Cisco DNA Spaces API will be processed to produce information similar to that gathered by the WISEBoxes, i.e. counts of number of unique users connected to an access point in a period of time.
To develop our ‘busyness’ models, we will also deploy human researchers to count people in the designated buildings and spaces. This human-counting element will provide a gold standard for those buildings at the time of counting. The gold standard can then be modelled against data produced simultaneously by the WiFi-based counting methods, yielding an estimated level of busyness.
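As a deliberately simple illustration of that modelling step (the project team may well use something more sophisticated), the gold-standard head counts can be used to calibrate the WiFi-derived counts, for example with a linear fit; all numbers below are made up.

```python
import numpy as np

# Hypothetical paired 5-minute samples: WiFi-derived unique-device counts and the
# researchers' gold-standard head counts for the same intervals.
wifi_counts = np.array([12, 30, 45, 60, 80, 95], dtype=float)
human_counts = np.array([10, 22, 35, 44, 61, 70], dtype=float)

# Least-squares calibration: estimated occupancy = a * wifi_count + b.
a, b = np.polyfit(wifi_counts, human_counts, deg=1)

def estimated_busyness(wifi_count):
    """Map a raw WiFi device count to an estimated number of people present."""
    return a * wifi_count + b

print(round(estimated_busyness(50), 1))   # estimate for a new WiFi count of 50
```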
With the help of several research assistants, we will collect 40 hours of human-counting data, illustrating building activity over a typical workweek. We expect to start this human-counting work in the School of Computer Science Building mid-January 2021.
This gold standard human-count will include both a door count and an internal building count. For each designated building, we will have researchers posted at the entrances and exits to undertake door counts. The door counters will tally numbers of people going in and numbers going out within 5-minute intervals using + and – signs. On each floor, researchers will count people occupying rooms and other spaces in the building (e.g., offices, labs, atrium, corridors). Each space will be labelled by room number or name on a tally sheet. Researchers will do two rounds of their assigned floor per hour, checking numbers of people occupying the various spaces. Different buildings will require different arrangements of researchers to enable an accurate count. For example, to cover a school building like Computer Science on Jubilee, we will have 6 researchers counting at any one time.
We expect some of the data collected from the WiFi probes and connections to be spurious (noise); however, this is not a concern. Why? Because to represent busyness, we do not need to worry about exact numbers.
It is accepted that the data may not be accurate; for example, someone’s device may send a WiFi probe signal to an access point (AP) or WISEBox in the designated building even though that person is not actually in the building. This potential for inaccuracy is a recognised feature of the privacy-preserving approach we are taking to model busyness for the social distancing tool, SoDis. The researchers undertaking the human-counting study may miss the occasional person roaming the building, but this level of error is not of particular concern. When the human count is triangulated with the sources of WiFi data, a model of busyness for that space will be produced.
The approach we are testing is relevant not only to our current desire to reduce infection from COVID-19 but may also prove useful to support other health and social causes.
The practice of attempting to replicate historical analogue music equipment within the digital domain remains a popular and enduring trend. Notable examples include electric guitar tone studios and digital amplifier simulators, virtual synthesisers, and effects plugins for digital audio workstations such as Logic, GarageBand or Cubase.
As expected, such examples attempt to replicate the much-loved nuances of their analogue counterparts, whether that be the warmth of vintage valve amplification and magnetic tape saturation, or the unpredictable, imperfect and organic characteristics of hand-assembled and aged electronic circuitry.
Within the Sonic Futures online interactive exhibits we can hear the sonic artefacts of these hardware related characteristics presented to us within the digital domain of the web; the sudden crackle and hiss of a sound postcard beginning to play and the array of fantastic sounds that can be achieved with Nina Richards’ Echo Machine. After all, who would be without that multi-frequency, whooping gurgle sound you can create by rapidly adjusting the echo time during playback?
Within digital music technology, while this appetite for sonic nostalgia is interesting in itself, we can also see how this desire to digitally replicate an ‘authentic experience’ extends to the way in which these devices are visually represented, and how the musician, music producer or listener is directed to interact with them. Again, we see this in the Sonic Futures online interactive exhibits: the Sound Postcard Player with a visual design reminiscent of a 1950s portable record player; the Echo Machine’s visual appearance drawing upon the design of Roland’s seminal tape-based echo machine of the 1970s, the RE-201 or Space Echo; and Photophonic with its use of retro sci-fi inspired fonts and illustrations.
We can see even more acute examples of this within some of the other examples given earlier, such as virtual synthesisers and virtual guitar amplifiers, where features such as backlit VU meters (a staple of vintage recording studio equipment) along with patches of rust, paint chips, glowing valves, rotatable knobs, flickable switches and pushable buttons are often included and presented to us in a historically convincing way as an interface through which these devices are to be used.
This type of user-interface design is often referred to as skeuomorphism, and it is prevalent within many digital environments; the trash icon on your computer’s desktop is a good example (and is often accompanied by the sound of a piece of crunched-up paper hitting the side of a metallic bin). Skeuomorphism as a design style tends to go in and out of fashion. You may notice the look and feel of your smartphone’s calculator changing through the course of various software updates, from one that is to a lesser or greater degree skeuomorphic to one that is more minimalist and graphical, often referred to as flat design.
Of course, it is only fitting that the Sonic Futures virtual online exhibits seek to sonically and visually reflect the historical music technologies and periods with which they are so closely associated. At a point in time when we are all seeking to create authentic or realistic experiences within the digital domain, whether it be a virtual work meeting or a virtual social meetup with friends and relatives, using the visual and sonic cues of our physical realities within the digital domain reassures us and gives our experience a sense of authenticity.
Along with the perceived sonic authority of original hardware, another notable reason why skeuomorphic design has been so persistent within digital music technology is the interface-heavy environment of the more traditional hardware-based music studio (think of the classic image of the music producer sitting behind a large mixing console with a vast array of faders, buttons, switches, dials and level meters). When moving from this physical studio environment to a digital one, it made sense, in order to facilitate learning, to make the new digital environment a familiar one.
Another possible contributing factor is the relative ease with which the digital versions can be put to use within modern music recording and producing environments: costing a fraction of the hardware versions and taking up no physical space, they can be pressed into action within bedroom studios across the globe. Perhaps this increased level of accessibility generates a self-perpetuating reverence for the original piece of hardware, which is inevitably expensive and hard to obtain, and therefore its visual representation within a digital environment serves as a desirable feature, an authenticating nod to an analogue ancestor.
There are, of course, exceptions to the rule. The digital audio workstation Ableton Live (along with some other DAWs and plugins) almost fully embraces a flat design aesthetic. This perhaps begs the question: what role, if any, does the realistic visual rendering of a piece of audio hardware play in its digital counterpart? What does it offer beyond the quality of the audio reproduction? From the perspective of a digital native (someone who has grown up in the digital age) its function as a way to communicate authenticity is thrown further into question and perhaps it is skeuomorphic design’s potential to communicate the history behind the technology that comes into focus.
Visit Echo, Sound Postcards and Photophonic to try the online experiments for yourself. You can also read more about the Sonic Futures project.
–originally posted on National Science and Media Museum Blog
As part of research into the use of personal data in improving the rail passenger experience, I am currently inviting individuals travelling on the train this month (December) to trial a proposed travel companion app aimed at helping rail travellers prepare for how they spend their time during journeys.
The app includes features such as travel information and reminders, as well as records of previous trip experiences.
Participants will be asked to use the app for their upcoming trip, after which they will complete a short questionnaire to share their thoughts about the app.
The study is open to anyone above the age of 18 years with some experience of rail travel in the UK. In addition, you will need to be travelling before the 20th of December, 2020.
More information can be found here.
Feel free to contact Christian with any queries: christian.tamakloe@nottingham.ac.uk
Hello. I am a 3rd year Horizon CDT PhD student partnered with Dstl. My PhD project is about the detection of deep learning generated aerial images, with the final goal of improving current detection models.
I am looking for participants from all backgrounds, as well as those who have specific experience in dealing with either Earth Observation Data (Satellite aerial images) or GAN-generated images.
Purpose: To assess the difficulty in the task of distinguishing GAN generated fake images from real satellite photos of rural and urban environments. This is part of a larger PhD project looking at the generation and detection of fake earth observation data.
Who can participate? This is open to anyone who would like to take part, although the involvement of people with experience dealing with related image data (e.g. satellite images, GAN images) is of particular interest.
Commitment: The study should take between 5 and 15 minutes to complete and is hosted online on pavlovia.org
How to participate? Read through this Information sheet and follow the link to the study at the end.
Feel free to contact me with any queries. Matthew.Yates1@nottingham.ac.uk
post by Cecily Pepper (2019 cohort)
My first summer school started with an invite via email. Despite my interest in the topic, my first thought was that robotics was not my area of expertise (coming from a social science background), so maybe I shouldn’t bother applying as I’d be out of my depth. After some consideration, though, I thought it would create some great opportunities to meet new people from diverse backgrounds. So I stopped worrying about my lack of knowledge in the area and just went for it; and I got a place!
The summer school was held digitally due to COVID-19 restrictions, which had both its benefits and pitfalls. On the first day, we were welcomed by Debra Fearnshaw and Professor Steve Benford, and were then given the opportunity to introduce ourselves. From this it was apparent that there was a wide variety of delegates from several universities, with a range of disciplines including social sciences, robotics, engineering and manufacturing. The first day mostly consisted of talks from experts about the challenges we face in connecting technology and the potential of co-robotics within the fields of agrirobotics, home and healthcare. The main task of the summer school was to create a cobot (collaborative robot) that could overcome some of the issues that COVID-19 has created or exacerbated. The issue that the group chose to address had to fall into one of the categories introduced on the first day: food production (agrirobotics), healthcare or home. Along with this challenge, more details were needed on function, technological components, and four key areas of the cobot design: ethics, communication, learning and safety. These aspects were introduced on the second day. After being split into groups at the end of the first day, I felt happy as my group had a range of experience and expertise between us, which I felt would bode well for the challenge as well as being beneficial for myself as I could learn something from everyone.
Similarly, the second day consisted mostly of talks, this time based on the four themes mentioned previously. The ethics discussion was interesting and included in-depth explanations around aspects to consider when reflecting upon the ethical consequences of our designs, such as privacy, law, security and personal ethics. An online activity followed the ethics talk but was soon interrupted by a technical glitch. Despite this, we were able to engage with alternative resources provided in order to reflect upon the ethics of our cobot design. This was useful both for our eventual design, as well as applying this to our own PhD research.
The other themes then followed, including a discussion around interaction and communication in technology. This was an insightful introduction to voice user interfaces and the like, and to what current research is focusing on in this field. While fascinating on its own, it was also useful for thinking about how to apply this to our cobot design, and which features might be useful or necessary for our cobot’s functionality. A talk on the third theme of learning was then delivered, including details about facial recognition and machine learning, and the applications of these in the field of robotics. Likewise, this was useful in reflecting upon how these features might be applicable in our design. Finally, the theme of safety was considered. This talk provided us with the knowledge and ability to consider the safety aspects of our cobot, which was particularly apt when considering COVID safety implications too. Overall, the first two days were quite lengthy in terms of screen time (despite some breaks), and I found myself wilting slightly towards the end. However, I think we could all understand and sympathise with the difficulty of minimising screen time when there is a short space of time to complete all of the summer school activities.
On the final day, we split into our teams to create our cobot. This day was personally my favourite part of the summer school, as it was fantastic to work with such a variety of people who all brought different skills to the group. Together, we developed a cobot design and went through the themes from the previous day, ensuring we met the design brief and covered all bases. Probably the biggest challenge was keeping it simple, as we had so many ideas between us. Despite our abundance of ideas, we were strict with ourselves as a group to focus and keep the design simplistic. Additionally, the five-minute presentation time meant that we had to keep our design simple yet effective. We then presented our home assistant cobot, Squishy. Squishy was an inflatable, soft cobot designed to assist carers in lifting patients who were bed-bound (as occupational injuries are a significant problem within the care industry). Squishy’s soft design enabled comfort for the patient being lifted, while the modular design provided a cost-effective solution and the possibility of added-extras if necessary. Along with this, Squishy was beneficial in that it consisted of wipe-clean surfaces to enable effective cleaning in light of COVID-19, as well as aiding social distancing by reducing the need for carer-patient contact. Other features of Squishy included machine-learned skeletal tracking and thermal cameras to aid safe functionality, and minimal personal data collection to maintain ethical standards. After the presentations and following questions, the judges deliberated. Results were in…my team were the winners! While I was happy to have won with my team, the most fruitful part of the experience for me was meeting and learning from others who had different backgrounds, perceptions and ideas.
Overall, I felt the summer school was well-organised and a fantastic opportunity to work with new people from diverse backgrounds, and I was very glad to be a part of it. I’m also pleased I overcame the ‘Imposter Syndrome’ feeling of not believing I would know enough or have enough experience to be a valuable delegate in the summer school. So, my advice to all students would be: don’t underestimate what you can contribute, don’t overthink it, and just go for it; you might end up winning!