Reflection on Writing and Presenting a Conference Paper

post by Laurence Cliffe (2017 cohort)

The Audio Mostly 2019 conference provided me with a relevant and convenient platform for presenting an outline of my PhD research to date. Convenient, and also economical, because this year it was hosted by the University of Nottingham’s Department of Music; highly relevant, because many papers from this conference’s previous proceedings have been important points of reference throughout my PhD work. Having followed research projects of specific relevance to my PhD, Audio Mostly presents itself not only as an appropriate platform for the publication of my work, but also as a springboard for other publishing possibilities. This is evident in the many projects that are initially presented at Audio Mostly and then, as they progress and evolve, are extended and published as journal articles or at other conferences.

The published paper presented a synopsis of what I considered the most pertinent points of my research so far. Rather than presenting specific data from the results of studies, it presented the results of my practical lab-based work developing a working technical prototype, and outlined my methodology and approach along with two proposed study environments, the latter being the subject of ongoing and future development of the project.

All of my supervision team had input on the paper, from proofreading to practical advice and some written introductory content. Another academic, involved in one of the proposed studies, also provided written content introducing that part of the project. I wrote an initial draft and then sent it to the relevant parties with specific requests on how I thought each might contribute to its authorship.

One comment from a particular reviewer proved very useful, centring on the use and definition of a specific acronym. This prompted me to investigate the issue further and, in doing so, has enabled me to focus my research to a much greater extent and to communicate its subject to others more effectively. It has also given me a much clearer definition of where my research sits within its specific sphere of study.

As well as presenting the paper, I also had the opportunity to demonstrate my technical prototype at the conference. Being scheduled to present my paper before my demo gave me the perfect opportunity to engage with people whilst demonstrating: answering questions and continuing discussions arising from my presentation, and answering some of those questions practically via the technical demonstration. Generally, the feedback was complimentary and showed an interest in my work, especially in relation to its study through practical application.

Authors whose papers were accepted to the conference have since been invited to contribute to a special edition journal on audio interactivity, building on the papers initially presented. This seems like a logical next step: I have since completed some of the proposed studies, so I can include their findings and conclusions with a view to formulating a journal article, giving me an opportunity to publish the subsequent stage of my PhD research.

On reflection, two particular challenges spring to mind as a result of this publication and presentation process. The first was the practical task of condensing a 7000-word paper into a 20-minute presentation. What content to include? What content to leave for discussion? How much detail do I need on specific points to get them across? These were all questions I was asking myself. The second challenge was presenting ‘live’ research. By the time I actually presented, my research had moved on: I had changed some of the technology within the prototype, and another study opportunity had presented itself which I hadn’t included in my future work section. This led to a bit of back-pedalling during the presentation, but I did have the opportunity to discuss these points with individuals during my demonstration.

Link to my paper.

Originally posted on Laurence’s blog.

Internship as UX/UI designer

post by Serena Midha (2017 cohort)

As someone whose journey so far has been straight through education, from school to BSc to MSc and PhD, exposure to life outside of the education bubble has been fairly limited. So, for the internship, I was keen to work in industry!

With the arrival of the pandemic in my third year, there was a fair amount of concern that opportunities for an internship would be sparse. Around the time I was starting to mildly panic, an advertisement appeared for a virtual internship as a UX/UI Designer. The company was a start-up called Footfalls and Heartbeats, which had developed a technology that allows knitted yarns to act as physiological sensors. The internship focused on one product: a knee sleeve designed to provide physiological feedback to physiotherapists and athletes during training. The product was still under development, but the prototype looked just like the soft knee braces weightlifters wear, and the data it could measure included the knee’s range of motion and a squat repetition count; the product had the potential to measure velocity too, but that was an aim for further in the future.

The description seemed tailored to my idea of an ideal internship! It was related to my PhD, as my research involves investigating effective ways of conveying brain data to users, and the internship project investigated ways of conveying the physiological data from the knee sleeve to users. The project description also suited my interests in sport (and, weirdly, knees and sewing). I applied and was lucky enough to be accepted. The application process had a few stages. The first was the submission of a CV and personal statement. After that, I was asked to complete a practical task: a UX task of evaluating where I would place a certain aspect of the knee sleeve connection within the app, and a UI task of producing high-fidelity wireframes in Figma (a design tool) based on low-fidelity wireframes that were provided. The task had a 5-day deadline and I had no UI experience. To be honest, I had never heard of Figma (or high-fidelity wireframes, or basically anything to do with UI), so I spent all 5 days watching YouTube videos and doing a lot of learning! An interview with a director and the data scientist/interface designer followed the practical task, and they liked my design (somehow I forgot to tell them that I had only just learned what Figma was)!

There were two of us doing the internship: I was to design the desktop app and the other person the mobile and tablet app. We were supervised by the data scientist who had interviewed me; he was a talented designer, which meant he often took on design roles in the company. He wanted to create an office-like atmosphere even though we were working remotely, so the three of us remained on a voice call all day (muted) and piped up when we wanted to discuss anything.

With the product still very much under development and its direction ever-changing, our project changed during every weekly team meeting for the first 4 or 5 weeks. I think this was because the company wasn’t really sure where the product was going: they would ask us to do something, like display a certain type of data, only for us to find out the next week that the product couldn’t measure that type of data. The product was originally supposed to be a business-to-consumer product, so we started designing a detailed app fit for end users, but the company’s crowdfunding was unsuccessful and they changed direction to create a business-to-business product. This meant that our project changed to designing a tablet demo app which showcased what the product could do. They definitely didn’t need two interns for this project, but we made it work!

The most stand-out thing to me about the whole internship was the lack of market research within the team – I don’t think there was any! The product was designed for professional athletes and physiotherapists, yet I really couldn’t see how the two main sources of data it could measure would be useful for either party. I was pretty sure athletes wouldn’t want an app to count their reps when they could do it in their heads, and I was pretty sure that physios were happy measuring range of motion with a plastic goniometer (and patients with swollen knees wouldn’t be able to fit into the knee sleeve). I raised these points, and the company asked me to speak to my personal physio; his feedback was that he would have no use for the knee sleeve. However, the company decided to carry on with these functions as the main focus of the knee sleeve measurements, and I think this was because measuring this data was most achievable in the short term. The whole thing was proper baffling!

However, by the end of the internship we had produced a really nice demo app, and I had learned a lot about how to design a whole app! We generally started with sketches of designs, which were then digitised into low-fidelity wireframes and developed into the high-fidelity end version. I also learned about some really helpful tools that designers use, such as font identifiers and colour palette finders. We produced a design document which communicated our designs in detail to the engineers who were going to build the app. And I gained a very valuable insight into a start-up company which was chaotic yet friendly.

My supervisor on the project was great to work with. He made sure we got the most out of the internship and had fun whilst doing it, and he created a very safe space between the three of us. The company had a very inclusive and supportive atmosphere and made us feel like part of the team. I think the product has a lot of potential but needs further development, which would mean a later release date. I’m most looking forward to seeing what happens with the knitted sensor technology, as it has many potential applications, such as in furniture or shoes.



Build a Better Primary

post by Dr Rachel Jacobs (2009 cohort)

My studio in Nottingham – Primary – is running a large crowdfunding campaign to support developing the building and keeping the arts resilient in Nottingham after the pandemic. In return for supporting their developments, they are offering artworks, art books, postcards and more.

Primary is a local artist-led contemporary visual arts organisation, based at the old Douglas Road Primary School in Radford, Nottingham. They run a free public programme of events and exhibitions, and provide studio spaces to over 50 resident artists. They are a vital arts space for the city. They have worked regularly with Horizon and the Mixed Reality Lab through my work and other collaborations with researchers.

If you are interested in supporting them and receiving artworks in return look at the website here:

Please also pass this on to anyone you know who loves art and you think might be interested!

Thank you!


Call for Participants – Interaction for Children & Young People

Have you been affected by not seeing your family and friends during Covid-19 restrictions?

Horizon CDT PhD student Mel Wilson (2018 cohort) is looking for participants to help with her research into the effects of Covid-19 on children and young people.

You can find out more details on how to participate here.

For any additional information or queries please feel free to contact Mel at


International Summer School Programme on Artificial Intelligence

post by Edwina Abam (2019 cohort)


The summer school programme I enrolled on this summer was the 3rd edition of the International Summer School Programme on Artificial Intelligence, with the theme Artificial Intelligence from Deep Learning to Data Analytics (AI-DLDA 2020).

The programme was organised by the University of Udine, Italy, in partnership with Digital Innovation Hub Udine, the Italian Association of Computer Vision, Pattern Recognition and Machine Learning (CVPL), the Artificial Intelligence and Intelligent Systems National Lab, AREA Science Park and the District of Digital Technologies ICT regional cluster (DITEDI).

The AI-DLDA summer school is usually held in Udine, Italy; however, following the development of the COVID-19 situation, this year’s edition was held entirely online via an educational platform, lasting 5 days from Monday 29th June until Friday 3rd July 2020. Around 32 PhD students from over 8 different countries took part, alongside masters students, researchers from across the world and several practitioners from Italian industry and government.

School Structure

The school programme was organised into four intensive teaching days, with keynote lectures in the morning sessions and practical workshops in the afternoons. The keynote lectures were delivered by 8 international speakers from top universities and high-profile organisations. Both the lectures and the lab workshops were delivered via a dedicated online classroom.

Key Note Lectures and Workshops

Day 1, Lecture 1: The first keynote lecture was delivered on the theme Cyber Security and Deep Fake Technology: Current Trends and Perspectives.

Deep fakes are multimedia content created or synthetically altered using generative machine learning models. It was stated that, with the current rise in deep fakes and synthetic media content, the historical belief that images, video and audio are reliable records of reality is no longer tenable. The image in figure 1 below shows an example of the deep fake phenomenon.

Research on deep fake technology shows that the phenomenon is growing rapidly online, with the number of fake videos doubling over the past year. The increase is reportedly driven by the growing ubiquity of tools and services that have lowered the barrier to entry and enabled novices to create deep fakes. The machine learning models used to create or modify such multimedia content are Generative Adversarial Networks (GANs); variants of the technique include StarGAN and StyleGAN.

Figure 1: Deep Fake Images

The speakers presented their own work, which focused on detecting deep fakes by analysing convolutional traces [5]. They focused on images of human faces, trying to detect convolutional traces hidden in those images: a sort of fingerprint left throughout the image-generation process. They proposed a new deep fake detection technique based on the Expectation-Maximization algorithm. Their method outperformed current approaches and proved effective in detecting fake images of human faces generated by recent GAN architectures.
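The detection pipeline itself is not reproduced in the paper summary above, but the Expectation-Maximization algorithm it builds on is worth sketching. Below is a minimal, generic EM fit of a two-component 1-D Gaussian mixture; it is purely illustrative of the E-step/M-step alternation, not the authors’ image-trace method.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Illustrative only: the deepfake work applies EM to convolutional
    traces in images, not to raw 1-D samples like this.
    """
    # Initialise: means at the data extremes, shared variance, equal weights
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return mu, var, pi
```

On data drawn from two well-separated Gaussians, the recovered means land close to the true component centres after a few dozen iterations.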

This lecture was really insightful for me because I got the opportunity to learn about Generative Adversarial Networks, their architectures and their real-world applications directly from leading researchers.

Day 1, Lecture 2: Petia Radeva from the University of Barcelona gave a lecture on food recognition. The presentation discussed uncertainty modelling for food analysis within an end-to-end framework. They treated food recognition as a Multi-Task Learning (MTL) problem, as automatically identifying foods from different cuisines across the world is challenging due to uncertainty. The MTL problem is shown in figure 2 below. The presentation introduced aleatoric uncertainty modelling to address the problem of uncertainty and to make the food image recognition model smarter [2].

Figure 2: Food Image Recognition Problem
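The aleatoric idea mentioned above is often implemented by having the network predict a log-variance alongside its output, so that ambiguous inputs can be down-weighted. The snippet below is a minimal numpy sketch of the standard heteroscedastic regression loss; it is a generic formulation, not the exact loss from the talk (the classification variant used for food recognition differs in detail).

```python
import numpy as np

def aleatoric_loss(y_true, y_pred, log_var):
    """Heteroscedastic (aleatoric) regression loss: the model predicts both
    a value and a log-variance per sample. Large predicted variance shrinks
    the squared-error term but is penalised by the +0.5*log_var term."""
    return np.mean(0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2
                   + 0.5 * log_var)
```

The trade-off is visible directly: for a sample with a large error, admitting high uncertainty (a large `log_var`) yields a lower loss than confidently predicting the wrong value, which is what lets the model flag ambiguous food images.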

Day 1, Lecture 3: The final keynote lecture of day 1 focused on robotics, on the topic Learning Vision-based, Agile Drone Flight: from Frames to Event Cameras, delivered by Davide Scaramuzza from the University of Zurich.

He presented several pieces of cutting-edge research in the field of robotics, including real-time, onboard computer vision and control for autonomous, agile drone flight [3]. Figure 3 below shows autonomous drone racing learned from a single demonstration flight.

Figure 3: Autonomous Drone Racing

The presentation also included an update on their current research on the open challenges of computer vision, arguing that the past 60 years of research have been devoted to frame-based cameras, which are arguably no longer good enough, and proposing event-based cameras as a more efficient and effective alternative, as they do not suffer from the problems faced by frame-based cameras [4].

Day 1, Workshop Labs: During the first workshop we had a practical introduction to the PyTorch deep learning framework and the Google Colab environment, led by Dott. Lorenzo Baraldi from the University of Modena and Reggio Emilia.

Day 2, Lecture 1: Prof. Di Stefano gave a talk on scene perception and unlabelled data using deep convolutional neural networks. His lecture focused on depth estimation by stereo vision and the performance of computer vision models against the benchmark Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset. He also discussed novel advances in methods for solving computer vision problems such as monocular depth estimation, proposing that this can be solved via transfer learning [10].

Day 2, Lecture 2: Prof. Cavallaro from Queen Mary University of London delivered a lecture on robust and privacy-preserving multi-modal learning with body cameras.

Lab 2, Part I: Sequence understanding and generation, again led by Dott. Lorenzo Baraldi (University of Modena and Reggio Emilia)

Lab 2, Part II: Deep reinforcement learning for control (Dott. Matteo Dunnhofer, University of Udine)

Day 3, Lecture 1: The keynote lecture focused on self-supervision, on the topic Self-supervised Learning: Getting More for Less Out of your CNNs, by Prof. Bagdanov from the University of Florence. In his lecture he discussed self-supervised representation learning and self-supervision for niche problems [6].

Day 3, Lecture 2 was delivered by keynote speaker Prof. Samek from the Fraunhofer Heinrich Hertz Institute on a hot topic in the field of artificial intelligence: Explainable AI: Methods, Applications and Extensions.

The lecture covered an overview of current AI explanation methods and examples of their real-world applications. We learnt that AI explanation methods can be divided into four categories: perturbation-based methods, function-based methods, surrogate-based methods and structure-based methods. Structure-based methods such as Layer-wise Relevance Propagation (LRP) [1] and Deep Taylor Decomposition [7] are to be preferred over function-based methods, as they are computationally fast and do not suffer the problems of the other types of methods. Figure 4 shows details of the layer-wise relevance propagation technique.

Figure 4: Layer-wise Relevance Propagation
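To make the structure-based idea concrete, here is a toy numpy sketch of the basic LRP rule (LRP-0) applied to a tiny two-layer ReLU network with made-up weights: the output score is redistributed backwards through each linear layer in proportion to each input’s contribution. This is a simplified illustration, not the full method from the lecture (which includes further rules for biases, convolutions, and stabilisation).

```python
import numpy as np

def lrp_linear(a, w, R_out, eps=1e-9):
    """One basic LRP step through a bias-free linear layer: redistribute the
    output relevance R_out to the inputs in proportion to a_j * w_jk."""
    z = a @ w                                        # the layer's pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised ratio
    return a * (w @ s)                               # relevance of the inputs

# Tiny two-layer ReLU network (weights are arbitrary, for illustration only)
rng = np.random.default_rng(1)
w1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=(3, 1))

x = np.abs(rng.normal(size=4))   # input
h = np.maximum(x @ w1, 0)        # hidden ReLU activations
y = h @ w2                       # network output score

# Propagate relevance from the output back to the input features
R_hidden = lrp_linear(h, w2, y.copy())
R_input = lrp_linear(x, w1, R_hidden)
```

A useful sanity check, and the property that makes LRP interpretable, is conservation: the input relevances sum (up to the numerical stabiliser) to the network’s output score, so the explanation accounts for the whole prediction.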

Overall, it was concluded that the decision functions of machine learning algorithms are often complex, and analysing them can be difficult. Nevertheless, leveraging the model’s structure can simplify the explanation problem [9].

Lab 3, Part I: Going Beyond Convolutional Neural Networks for Computer Vision, led by Dott. Niki Martinel and Dott.ssa Rita Pucci (University of Udine)

Lab 3, Part II: Going Beyond Convolutional Neural Networks for Computer Vision (Dott. Niki Martinel and Dott.ssa Rita Pucci, University of Udine)

Day 4: The final keynote lecture was delivered by Prof. Frontoni on human behaviour analysis. The talk concentrated on the study of human behaviours, specifically deep understanding of shopper behaviours and interactions using computer vision in the retail environment [8]. The presentation showed experiments conducted on different shopping datasets, tackling retail problems including user interaction classification, person re-identification, weight estimation and human trajectory prediction using multiple store datasets.

The second part of the morning session on Day 4 was open for PhD students to present their research to the other participants on the programme.

Lab 4, Part I: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine)

Lab 4, Part II: Machine and Deep Learning for Natural Language Processing (Dott. Giuseppe Serra and Dott.ssa Beatrice Portelli, University of Udine)

Concluding Remarks

The summer school programme offered us the benefit of interacting directly with world leaders in Artificial Intelligence. The insightful presentations from leading AI experts updated us about the most recent advances in the area of Artificial Intelligence, ranging from deep learning to data analytics right from the comfort of our homes.

The keynote lectures from world leaders provided an in-depth analysis of the state-of-the-art research and covered a large spectrum of current research activities and industrial applications dealing with big data, computer vision, human-computer interaction, robotics, cybersecurity in deep learning and artificial intelligence. Overall, the summer school program was an enlightening and enjoyable learning experience.


  [1] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140, 2015.
  [2] Marc Bolaños, Marc Valdivia, and Petia Radeva. Where and what am I eating? Image-based food menu recognition. In European Conference on Computer Vision, pages 590–605. Springer, 2018.
  [3] Davide Falanga, Kevin Kleber, Stefano Mintchev, Dario Floreano, and Davide Scaramuzza. The foldable drone: A morphing quadrotor that can squeeze and fly. IEEE Robotics and Automation Letters, 4(2):209–216, 2018.
  [4] Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Jörg Conradt, Kostas Daniilidis, et al. Event-based vision: A survey. arXiv preprint arXiv:1904.08405, 2019.
  [5] Luca Guarnera, Oliver Giudice, and Sebastiano Battiato. DeepFake detection by analyzing convolutional traces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 666–667, 2020.
  [6] Xialei Liu, Joost Van De Weijer, and Andrew D. Bagdanov. Exploiting unlabeled data in CNNs by self-supervised learning to rank. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1862–1878, 2019.
  [7] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65:211–222, 2017.
  [8] Marina Paolanti, Rocco Pietrini, Adriano Mancini, Emanuele Frontoni, and Primo Zingaretti. Deep understanding of shopper behaviours and interactions using RGB-D vision. Machine Vision and Applications, 31(7):1–21, 2020.
  [9] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Toward interpretable machine learning: Transparent deep neural networks and beyond. arXiv preprint arXiv:2003.07631, 2020.
  [10] Alessio Tonioni, Matteo Poggi, Stefano Mattoccia, and Luigi Di Stefano. Unsupervised adaptation for deep stereo. In Proceedings of the IEEE International Conference on Computer Vision, pages 1605–1613, 2017.

Call for Participants: Mental workload in daily life

Fourth-year Horizon CDT PhD student Serena Midha is recruiting participants to take part in a research study.

Serena is researching mental workload from a daily life perspective. Serena and her team aim to gather a full 5 days of subjective workload ratings, as well as data on the activities that generated those ratings. They also want to further their understanding of people’s personal experiences of mental workload.

Participant requirements:

      • Android Users
      • Office workers outside of academia
      • Without a clinical history of anxiety or depression

Participants will be offered £75 for participating in the study.

More information about the study can be found here.

You can contact Serena with any queries.


You can check out Serena’s Research Highlights here:



Attitudes and Experiences with Loan Applications – Participants needed

post by Ana Rita Pena (2019 cohort)

My PhD investigates how technologies that protect individuals’ privacy in automated loan application decisions affect people. Within this broad topic, I am interested in personal experiences of loan applications with regard to trust, fairness and decision making.

I am currently recruiting for my next study, Attitudes and Experiences with Loan Applications: UK Context. The study is made up of a 45-minute interview (held online) and a follow-up online survey.

This study aims to understand how people feel about loan applications and data sharing in this context, and how well they understand the process behind lending decisions.

We will be focusing on personal loans in particular. Participants will not have to disclose specific information about the loan they applied for (its monetary value, for example) but are invited to reflect on their experiences.

I am looking to recruit people who:
— are over the age of 18
— have applied for a loan in the UK
— are proficient in English
— are able to provide consent to their participation

Participation in the study will be compensated with a £15 online shopping voucher.

More information about the interview study can be found here.

If you have any further questions or are interested in participating, don’t hesitate to contact me at

Thank you!

Ana Rita Pena

You can read more about Rita’s research project here.


Pepita Barnard is a Research Associate at Horizon Digital Economy Research and has recently submitted her PhD thesis.

post by Pepita Barnard (2014 cohort)

I am excited to be working with Derek McAuley, James Pinchin and Dominic Price from Horizon on a Social Distancing (SoDis) research project. We aim to understand how individuals act when given information indicating concentrations of people, and thus busyness of places.

We are taking a privacy-preserving approach to the project data, which is collected from mobile devices’ WiFi probe signals. With the permission of building managers and the relevant Heads of Schools, the SoDis Counting Study will deploy WISEBoxes in a limited number of designated University buildings, gather the relevant data from the Cisco DNA Spaces platform, which the University has implemented across its WiFi network, and undertake a gold-standard human count.

What are WISEBoxes? There’s a link for that here

Essentially, WISEBoxes are a sensor platform developed as part of a previous Horizon project, WISEParks. These sensors count the number of WiFi probe requests seen in a time period (typically 5 minutes) from unique devices (as determined by MAC address). MAC addresses, which could be considered personally identifiable information, are only stored in memory on the WISEBox for the duration of the count (i.e. 5 minutes). The counts, along with some other metadata (signal intensities, timestamp, and the WiFi frequency being monitored), are transmitted to a central server hosted on a University of Nottingham virtual machine. No personally identifiable information is permanently stored or recoverable.
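The counting scheme just described can be sketched in a few lines: group probe records into fixed windows, count unique devices per window, and keep only the counts. This is an illustrative sketch, not the WISEBox firmware (which is not shown in the post); the salted hashing here is one hypothetical way to avoid handling raw MAC addresses even transiently in application code.

```python
import hashlib
from collections import defaultdict

WINDOW = 300  # seconds: the 5-minute count interval used by the WISEBoxes

def window_counts(probes, salt=b"rotate-me-per-window"):
    """Count unique devices per 5-minute window from (timestamp, mac)
    probe records. Only a salted hash of each MAC is held, and only for
    deduplication within a window; raw addresses are never stored."""
    seen = defaultdict(set)
    for ts, mac in probes:
        digest = hashlib.sha256(salt + mac.encode()).hexdigest()
        seen[ts // WINDOW].add(digest)
    # Emit only the per-window counts; the hash sets can then be discarded
    return {w * WINDOW: len(devices) for w, devices in sorted(seen.items())}
```

A probe seen twice in the same window counts once; the same device reappearing in a later window is counted again there, which is exactly the per-interval unique-device count the WISEBoxes report.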

We will have ‘safe access’ to Cisco DNA Spaces API, meaning MAC addresses and other identifiers will not be provided to the SoDis research team. The data we gather from Cisco DNA Spaces API will be processed to produce information similar to that gathered by the WISEBoxes, i.e. counts of number of unique users connected to an access point in a period of time.

To develop our ‘busyness’ models, we will also deploy human researchers to count people in designated buildings and spaces. This human-counting element will provide a gold standard for said buildings, at the time of counting. This gold standard can then be modelled against data simultaneously produced from WiFi signal counting methods, producing an estimated level of busyness.

With the help of several research assistants, we will collect 40 hours of human-counting data, illustrating building activity over a typical workweek. We expect to start this human-counting work in the School of Computer Science Building mid-January 2021.

This gold-standard human count will include both a door count and an internal building count. For each designated building, we will have researchers posted at the entrances and exits to undertake door counts. The door counters will tally the numbers of people going in and out within 5-minute intervals using + and – signs. On each floor, researchers will count people occupying rooms and other spaces in the building (e.g., offices, labs, atrium, corridors). Each space will be labelled by room number or name on a tally sheet. Researchers will do two rounds of their assigned floor per hour, checking the numbers of people occupying the various spaces. Different buildings will require different arrangements of researchers to enable an accurate count. For example, to cover a school building like Computer Science on Jubilee, we will have 6 researchers counting at any one time.
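The door-count arithmetic above is simple enough to sketch: bucket each in/out event into a 5-minute interval and keep a running net occupancy. This is a hypothetical helper to illustrate the tally-sheet logic, not code from the SoDis project.

```python
WINDOW = 300  # seconds: the 5-minute tally interval used by the door counters

def door_tally(events, n_windows):
    """Turn door events (timestamp, +1 for in / -1 for out) into per-interval
    in/out tallies and a running occupancy estimate, mirroring the paper
    tally sheets described above."""
    ins = [0] * n_windows
    outs = [0] * n_windows
    for ts, sign in events:
        w = int(ts // WINDOW)
        if sign > 0:
            ins[w] += 1
        else:
            outs[w] += 1
    # Net change per interval accumulates into an occupancy estimate
    occupancy, running = [], 0
    for i in range(n_windows):
        running += ins[i] - outs[i]
        occupancy.append(running)
    return ins, outs, occupancy
```

It is the per-interval occupancy series, rather than the raw tallies, that would be modelled against the simultaneous WiFi-derived counts to calibrate a busyness estimate.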

We expect some of the data collected from the WiFi probes and connections to be spurious (noise); however, this is not a concern. Why? Because to represent busyness, we do not need to worry about exact numbers.

It is accepted that the data may not be accurate: for example, a device may send a WiFi probe signal to an access point (AP) or WISEBox in a designated building even though its owner is not actually in the building. This potential for inaccuracy is a recognised feature of the privacy-preserving approach we are taking to model busyness for the social distancing tool, SoDis. The researchers undertaking the human-counting study may miss the occasional person roaming the building, but this level of error is not of particular concern. When the human count is triangulated with the sources of WiFi data, a model of busyness for that space will be produced.

The approach we are testing is relevant not only to our current desire to reduce infection from COVID-19 but may also prove useful to support other health and social causes.

Listening to Digital Dust

Laurence Cliffe (2017 cohort) writes about how the design of analogue music equipment influenced the online interactive experiments in the Science and Media Museum‘s Sonic Futures project.

The practice of replicating historical analogue music equipment in the digital domain remains a popular and enduring trend. Notable examples include electric guitar tone studios and digital amplifier simulators, virtual synthesisers, and effects plugins for digital audio workstations such as Logic, GarageBand or Cubase.

As expected, such examples attempt to replicate the much-loved nuances of their analogue counterparts, whether that be the warmth of vintage valve amplification and magnetic tape saturation, or the unpredictable, imperfect and organic characteristics of hand-assembled and aged electronic circuitry.

Within the Sonic Futures online interactive exhibits we can hear the sonic artefacts of these hardware related characteristics presented to us within the digital domain of the web; the sudden crackle and hiss of a sound postcard beginning to play and the array of fantastic sounds that can be achieved with Nina Richards’ Echo Machine. After all, who would be without that multi-frequency, whooping gurgle sound you can create by rapidly adjusting the echo time during playback?

Within digital music technology, while this appetite for sonic nostalgia is interesting in itself, we can also see how this desire to digitally replicate an ‘authentic experience’ extends to the way in which these devices are visually represented, and how the musician, music producer or listener is directed to interact with them. Again, we see this in the Sonic Futures online interactive exhibits: the Sound Postcard Player with a visual design reminiscent of a 1950s portable record player; the Echo Machine’s visual appearance drawing upon the design of Roland’s seminal tape-based echo machine of the 1970s, the RE-201 or Space Echo; and Photophonic with its use of retro sci-fi inspired fonts and illustrations.

Roland RE-201 ‘Space Echo’ audio effects unit
Screenshot of online interactive echo machine designed by Nina Richards

We can see even more acute examples of this within some of the other examples given earlier, such as virtual synthesisers and virtual guitar amplifiers, where features such as backlit VU meters (a staple of vintage recording studio equipment) along with patches of rust, paint chips, glowing valves, rotatable knobs, flickable switches and pushable buttons are often included and presented to us in a historically convincing way as an interface through which these devices are to be used.

This type of user-interface design is often referred to as skeuomorphism, and is prevalent across many digital environments; the trash icon on your computer’s desktop is a good example (and is often accompanied by the sound of a piece of crunched-up paper hitting the side of a metallic bin). Skeuomorphism as a design style tends to go in and out of fashion. You may perhaps notice the look and feel of your smartphone’s calculator changing through the course of various software updates, from one that is to a lesser or greater degree skeuomorphic to one that is more minimalist and graphical, often referred to as being of a flat design.

Of course, it is only fitting that the Sonic Futures virtual online exhibits seek to sonically and visually reflect the historical music technologies and periods with which they are so closely associated. At a point in time when we are all seeking to create authentic or realistic experiences within the digital domain, whether it be a virtual work meeting or a virtual social meetup with friends and relatives, using the visual and sonic cues of our physical realities within the digital domain reassures us and gives our experience a sense of authenticity.

Along with the perceived sonic authority of original hardware, another notable reason why skeuomorphic design has been so persistent within digital music technology is the interface-heavy environment of the more traditional hardware-based music studio (think of the classic image of the music producer sitting behind a large mixing console with a vast array of faders, buttons, switches, dials and level meters). When moving from this physical studio environment to a digital one, in order to facilitate learning, it made sense to make the new digital environment a familiar one.

Sound postcards player

Another possible contributing factor is the relative ease with which the digital versions can be put to use within modern music recording and producing environments: costing a fraction of the hardware versions and taking up no physical space, they can be pressed into action within bedroom studios across the globe. Perhaps this increased level of accessibility generates a self-perpetuating reverence for the original piece of hardware, which is inevitably expensive and hard to obtain, and therefore its visual representation within a digital environment serves as a desirable feature, an authenticating nod to an analogue ancestor.

There are, of course, exceptions to the rule. The digital audio workstation Ableton Live (along with some other DAWs and plugins) almost fully embraces a flat design aesthetic. This raises the question: what role, if any, does the realistic visual rendering of a piece of audio hardware play in its digital counterpart? What does it offer beyond the quality of the audio reproduction? From the perspective of a digital native (someone who has grown up in the digital age), its function as a way to communicate authenticity is thrown further into question, and perhaps it is skeuomorphic design’s potential to communicate the history behind the technology that comes into focus.

Visit Echo, Sound Postcards and Photophonic to try the online experiments for yourself. You can also read more about the Sonic Futures project.

–originally posted on National Science and Media Museum Blog