Second EPSRC Impact award for Richard!

Richard Ramchurn (2015 cohort) won the EPSRC “Telling Tales of Engagement” 2018 Award. This is the second year running that Richard has secured this award, which provides up to £10,000 of funding to maximise the reach and impact of his research. The “Telling Tales of Engagement” 2017 Award funded Richard’s mobile touring cinema, which travelled the UK in 2018 and 2019.


Post by Richard Ramchurn (2015 Cohort)

The MOMENT is a brain-controlled film which has been touring since 2018. It has primarily been screened in a converted cinema caravan, which has allowed it to travel outside the usual cinema circuit.

The research project adopts a performance-led, in-the-wild research methodology, in which impact through public performance is an inherent factor – the default practice behind this methodology is that real-world artefacts are professionally made and performed for the public as the mode of studying their design and implications.

The MOMENT has had over 300 public screenings across the world: at FACT, Lakeside Arts, Sheffield DOC/FEST, Ars Electronica, Kendal Calling, Blue Dot, Arts By The Sea, the Leeds, Geneva and Reykjavik International Film Festivals, and Aesthetica. We were also invited to international events: the SPARK British Council event in Hong Kong, the Brain Film Festival in Barcelona, and the Riverside Film Festival in Padua. Substantial work has gone into making the touring of the caravan self-sustaining, and there are plans to exhibit further throughout 2019.

I have been invited to present my research and engage in panel discussions with the film and computer game industries at Creative England’s Proconnect conference; the Continue Conference; Picturehouse London with B3 Media; Broadway Cinema Nottingham; the Geneva and Reykjavik International Film Festivals; Sheffield DOC/FEST; and FACT Liverpool. Organisations and individuals have since contacted me asking to work on upcoming projects, to collaborate, and to screen the film at their festivals.

The project has now secured funding to screen Live Score performances in 2020. In these performances, musicians Hallvarður Ásgeirsson and Scrubber Fox perform the score to The MOMENT as the film is created live from the brain data of an audience member. The performances are followed by a Q&A with the musicians and myself. We previewed the Live Score in Reykjavík and Nottingham last year to engaged audiences, and we are now planning a UK tour for the summer of 2020.

A live score accompaniment for an interactive film is a unique, timely and relevant proposition, capable of capturing both the public’s imagination and commercial interest. Our tour offers an alternative engagement proposition: creative, interactive live performances that large audiences can experience collectively at local arts venues. This model fits with the industry’s move towards marketing cinema as a live experience, both through streaming theatre and music performances to screening venues (e.g. NT Live) and by creating immersive environments in which screenings take place (e.g. Secret Cinema).

The Live Score has the potential to reach larger audiences, including both a wider film industry audience, and members of the public who may not usually engage in academic research. The performances are set to be an exciting and dynamic way to share my research.

The Unbanked and Poverty: Predicting area-level socio-economic vulnerability from M-Money transactions

Post by Gregor Engelmann (2014 Cohort)

Emerging economies around the world are often characterised by governments and institutions struggling to keep key demographic data streams up to date. The combination of mass call detail record (CDR) data with machine learning has recently been proposed as a way to obtain this data without the expense of traditional census and household survey methods. The paper is based on the exploratory analysis of CDR and mobile payment (M-Money) data for Dar es Salaam, Tanzania, carried out as part of my PhD research. It forms the basis for a chapter on the potential of mobile-phone-generated data to supplement traditional surveying methods for socio-economic analysis in urban and peri-urban areas.

The paper was written by N/LAB PhD student Gregor Engelmann in collaboration with his PhD supervisor James Goulding and N/LAB Data Science Lead Gavin Smith. Moving from an initial analysis of the M-Money data, and overcoming both technical and general paper-writing hurdles, to submitting and ultimately presenting the paper was a prolonged process that took nearly a year.

The major technical hurdle was cleaning the M-Money data and adding a geographical component to it. The dataset comprises anonymised logs of mobile payment transactions generated through the mobile financial services of a Tanzanian Mobile Network Operator. While CDR data has been used in a wide range of areas, from epidemiology to mobility and urban analysis, with more than 900 papers using CDR data published in the last decade alone, research on M-Money data remains scarce because such datasets are very difficult to obtain from mobile network providers.

Organisational hurdles included identifying an appropriate journal or conference venue, making sure that both abstract and paper were ready by the relevant deadlines, and raising enough funding for conference travel. The submission to the initially identified conference was ultimately delayed, as the paper required more work before submission to IEEE Big Data in early summer 2018.

Being based in the same lab made collaborating on the paper easier, as we could hold regular in-person meetings to review it. In addition, we made extensive use of Overleaf, an online LaTeX platform that allows for both collaborative writing and editing. LaTeX has the advantage of handling equations, figures, indices, etc. better than traditional writing software such as Word. LaTeX (and by extension BibTeX) also offers more effective bibliography and reference management, while allowing changes in formatting and style with a few simple lines of code.
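To illustrate the point about equations and bibliographies, a minimal LaTeX fragment combining both might look like the sketch below. It is purely illustrative: the symbols, citation key and bibliography file name are invented, not taken from the actual paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% An equation is written inline with the text, not drawn in a GUI:
We estimate area-level vulnerability $v_i$ from transaction
features $x_i$ via a learned mapping $f$:
\begin{equation}
  v_i = f(x_i) + \varepsilon_i
\end{equation}

% References are cited by key and formatted automatically;
% switching style is a one-line change:
as explored in prior CDR work \cite{example2018key}. % invented key
\bibliographystyle{ieeetr}   % e.g. swap to plainnat for another venue
\bibliography{references}    % assumes a references.bib file

\end{document}
```

Changing `ieeetr` to another style file is all it takes to reformat every reference, which is the kind of flexibility the paragraph above refers to.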

Gregor Engelmann presenting at the IEEE Big Data 2018 conference in Seattle, WA

The reviewers’ response to the paper was positive, and it was ultimately accepted as one of 98 regular papers (an 18.9% acceptance rate) without the need for major changes. In addition to funding from the Business School and the CDT, the paper secured one of the limited Student Travel Awards offered by the conference organisers to cover travel and registration costs. The paper was presented to a mixed academic and practitioner audience as part of the Big Data Applications: Society track at the IEEE Big Data 2018 conference in Seattle, WA, in December 2018.


N/LAB PhD student Gregor Engelmann’s paper, “The Unbanked and Poverty: Predicting area-level socio-economic vulnerability from M-Money transactions”, accepted and presented as a regular paper at the IEEE Big Data 2018 conference, is available online via the publisher (paywall) at https://ieeexplore.ieee.org/abstract/document/8622268/figures#figures and via Nottingham University eprints at http://eprints.nottingham.ac.uk/55720/

IEEE FG 2018 paper & conference experience

Post by Siyang Song (2016 Cohort)

Venue: FG 2018, Xi’an, China https://fg2018.cse.sc.edu/

Paper Title: Human Behaviour-based Automatic Depression Analysis using Hand-crafted Statistics and Deep Learned Spectral Features

The work presented in this paper is an extension of my CDT PLP module. During the PLP, we found that current clinical standards for depression assessment are subjective and require extensive participation from experienced psychologists. My supervisor (Dr. Michel Valstar) and I therefore explored a video-based automatic depression analysis approach, and found that it beat state-of-the-art systems on a benchmark dataset. As a result, we decided to write a paper about the approach.

The bulk of the paper was written by me. Since I didn’t have much experience of writing papers at that point, my supervisor taught me how to organise a paper, as well as how to design ablation studies to show the strengths of the proposed approach. In addition, my external partner (Prof. Linlin Shen) helped me check for typos and language issues. After several rounds of improvement, we eventually submitted the paper to the FG conference, which focuses on automatic face and gesture analysis.

Three months after the submission, we received the reviewers’ comments: two gave ‘weak accept’ and two gave ‘borderline’. In the rebuttal, we addressed the issues one by one. For genuine drawbacks, we made changes; for comments that arose from misunderstandings of the paper, we provided detailed explanations in the rebuttal.


This study is the basis of my PhD. Building on this work, we are continuously developing a better model for practical use and looking into other temporal modelling methods, as well as building models not only for depression analysis but also for personality trait and anxiety analysis.

https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8373825

–originally posted on Siyang’s blog

VRtefacts Outreach at Derby Museum & Art Gallery

Post by Joseph Hubbard-Bailey (2016 Cohort)

The VRtefacts project provides museum and gallery visitors with the opportunity to hold and explore exhibit objects which they would otherwise just look at from behind a literal or figurative red rope. Throughout the day, visitors from around the museum were invited to come and put on a VR headset, interact with 3D-printed, VR-augmented models of artefacts, and share their own story or commentary about the objects as they handled them. They then moved into another room for a short interview about the experience, allowing the next participant to get started with the VR. While previous outreach events I’ve done have felt engaging and productive, none have been as interactive as this VRtefacts trial; others mostly involved having conversations across tables, and the distance and dynamic between researcher and participant felt similar to a campus-based study scenario. Due to the nature of this event, with participants engaging physically and narratively during the session, members of the public seemed much more a part of what was going on, as opposed to passive spectators.

For the visitors who chose to participate in the VRtefacts project, the experience served as both a novel sort of ‘exhibit’ in itself and a novel way to access pre-existing materials in the museum’s collection. The latter seemed of particular value in the case of visitors who lived locally and so visited the museum often, offering an unexpected new level of access to familiar objects. The opportunity to contribute or “donate” a story as part of the VRtefacts experience may also have been particularly appealing to those who visit regularly and were keen to ‘give back’ to the museum. Several visitors did fall into this category of ‘regulars’, but there were also plenty of people who were passing through and popped in to pass the time. Visitors across both of these groups commented on how the decision to work with VRtefacts reflected well on Derby Museum, showing its openness to new ideas and resistance to stagnation. For those who were visiting the museum in groups, engaging with the VRtefacts exhibit seemed to provide a great source of interest and conversation, as they emerged and compared experiences. The fact that the corresponding artefacts themselves were available in the museum’s collection also meant that there was a comfortable transition back into the rest of the exhibit, as people could go and find the ‘real thing’ they had just encountered virtually.

Before I left the museum for the day, I sat down on the duct-taped-still chair and had my hairdo sabotaged by the VR headset so that I could have a go at the VRtefacts experience myself. I chose and inspected a small intricate model of a giant jet engine, turning it over and fumbling around the prickly detail of the gaskets while I tried to think of something clever to say for the camera. It reminded me of a frighteningly massive aircraft housed at the RAF Museum in Hendon, where I’d been for relentless school trips as a child due to its proximity to school grounds. I remember cowering through the awful hangar where the scary plane’s wings were so expansive that you had no option but to walk underneath them if you wanted to get out. While this wasn’t a pleasant experience, I think the physicality of being below the Vulcan — which I now know was not just a war plane, but a strategic nuclear bomber — came to mind during VRtefacts because it was a similar example of the power of perspective.

Image credit: Kenneth Griffiths (Ascension Island, 1982)

When an object is in a glass case or on a screen or behind a rope, I think we often instinctively revert to what I can only describe as a ‘flat’ perspective on it. We might press our noses to the glass as children to try and get a closer look, but the glass fogs up and we get told off, so eventually our curiosity wanes and we take a respectful cursory look instead. What this tired perspective gives us is often limited to two-dimensional factual information about the object of interest, without the weight and contour and colour of the object’s life. I’m very glad I decided to have a go with the VRtefacts pilot myself before I left the event, because it made me aware of how cowering under the expanse of the Vulcan’s wings taught me more about the gravity of war than any of my history lessons had. There is a narrative power in an artefact’s physicality which cannot be accessed by simply looking at it — the VRtefacts project has the potential to provide that physicality in a way that protects the original object, which needn’t even be on the same continent as its VR counterpart.

Beyond the benefit this technology could offer in enhancing the habitual gallery-goer’s usual experience, there is also potential benefit to those who aren’t so familiar and comfortable with these venues. Having come from a family who didn’t really go to museums or galleries, I still feel quite awkward and out of place in these spaces at times. I don’t think it’s much of a leap to suggest that projects like VRtefacts — which offer more diverse ways of accessing meaning in historical and art objects — have the potential to make galleries and museums not only more engaging for visitors, but more accessible to a diverse range of visitors.

Thanks to Jocelyn Spence and the rest of the VRtefacts team for letting me join in for the day!

VRtefacts is a pilot project developed within the European Union’s Horizon 2020 research and innovation programme under grant agreement No 727040, GIFT: Meaningful Personalization of Hybrid Virtual Museum Experiences Through Gifting and Appropriation.

–originally posted on Joe’s blog

Programming in Unity at the DEN Summer School

Post by Joe Strickland (2017 Cohort)

Back in the summer of 2018 I attended the DEN summer school in Bournemouth. One of the big draws of the summer school for me was the programming in Unity course that was being offered. Having come from a psychology background, I had no programming knowledge, but it was becoming clearer and clearer that this was going to hold me back during my PhD, especially when it came to prototyping ideas for experiences. The course itself was pretty good: we ran through several different elements of using Unity, including the basics of building scenes, game-object physics, and exporting our scene onto a smartphone and viewing it through a Cardboard headset as a VR experience. We also started using Vuforia and making basic AR content. This workshop gave me a good basic understanding of Unity, but more importantly, it showed that what I wanted to learn and eventually make was well within my grasp. This was very important for motivating me to carry on learning how to build Unity experiences, as well as to code in general.

Once the summer was over, my supervisors and I sat down and started discussing short-term goals to get me learning everything I’d need in order to build interactive AR experiences myself. The first of these goals was to learn Python and C#, to understand the logic of coding and be able to write my own Unity scripts to control different elements of the software. My supervisor ran me through all the basics in Unity that I might need for the specific things I was going to make, a welcome refresher after the summer school course, and I was sent off to learn my languages. Personally, I found Python quite easy to learn. The logic of the language made sense to me, and the online resource I had been recommended taught it in a very hands-on, practical way, with many small assignments to try out new coding knowledge and to keep old knowledge fresh and reinforced in your memory. Also, the course was broken up into bite-size chunks, and I found doing a lesson a day over the course of a month a very productive way of learning the language.

C# scripting was a little harder for me to grasp. I don’t know whether it was the difference between it and Python throwing me off, or knowing that learning it was going to be more important for my PhD, but it took a lot more effort to figure out what I was doing with it. I learnt through some of the Unity-provided tutorials, as well as other user-generated tutorials on YouTube. I was also learning how to use Unity specifically to make the first short-term goal project I had been assigned: making videos play in Unity. The Unity video player isn’t completely user friendly, and it took a lot of trial and error, and searching Unity message boards and community sites, to find out how to get it to work in the way I wanted. Having got it to work, I moved on to controlling it a bit more and building an experience where the audience can press keys to trigger the playing of different video clips. I crafted a game object for each video clip we had, and had them generated and destroyed whenever we needed that video playing, depending on the input of the audience. What I ended up with was a functional interactive film about a man trying to find his heart medication, in which the audience could decide whether he moved left, right, or had a heart attack at various points. When I showed my supervisors, they liked it but found the way I had made the film incredibly inefficient, so they tasked me with remaking it so that the different videos played on the same game object rather than on different ones. This next step proved challenging, but eventually I managed to write a functioning Unity script which changed the state of the game object and, once the game object was in a certain state, would play different videos in response to different audience inputs. It would then change its state again to allow the experience to progress.
This experience pleased my supervisors, but they didn’t like how making decisions at the wrong times broke the game, so I had to add delays into the script to stop audiences making decisions at the wrong points in the experience. Fortunately, this wasn’t too difficult to do, although trying to work with time while coding around the video player did prove confusing.
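The branching-video control described above is, at heart, a small state machine. A minimal sketch of that logic in plain Python (illustrative only — the actual implementation was a C# Unity script, and the state and clip names here are invented):

```python
# Sketch of the branching-video state machine described above.
# Each state maps audience inputs to (clip_to_play, next_state);
# in Unity this logic drove a single VideoPlayer game object.

TRANSITIONS = {
    "start":    {"left":  ("walk_left.mp4",    "corridor"),
                 "right": ("walk_right.mp4",   "kitchen")},
    "corridor": {"right": ("heart_attack.mp4", "end")},
    "kitchen":  {"left":  ("find_pills.mp4",   "end")},
}

def step(state, audience_input):
    """Return (clip, next_state). Inputs that are invalid in the
    current state are ignored, which is the spirit of the 'delay'
    fix: decisions at the wrong moment don't break the experience."""
    options = TRANSITIONS.get(state, {})
    if audience_input not in options:
        return None, state  # input arrived at the wrong time: ignore it
    return options[audience_input]

clip, state = step("start", "left")  # plays walk_left.mp4, moves to "corridor"
```

Keeping all transitions in one table is what makes the single-game-object version tractable: the object's state changes, but the playback machinery is reused for every clip.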

I was also asked to build a restaurant scene and fill it with moving virtual characters, but this was very similar to the summer school exercises and the Unity developer tutorials, so it didn’t prove too tricky. The characters were downloaded from Adobe Mixamo, so they came with animation cycles attached, and a few YouTube tutorials later I had people looking around and being furious at virtual restaurant tables.

Finally, I was asked to build an AR tester experience. I had to place a virtual character, like those from the restaurant scene, into a real-world environment and have them occluded by a real-world object: specifically, sitting behind and hidden by a real-world table. This is something that is surprisingly hard to find official Unity information on. There is lots of help for tracking markers and placing AR content in the real world, but not so much for having that content blocked by real objects. I eventually found a YouTube tutorial which addressed a similar problem in a way that allowed me to figure out how to solve my own. It showed that there was a depth occluder material that you could use to create invisible game objects that block the audience’s view of the virtual content. Creating a cube the size of a table top and placing it over the lap of my sitting virtual character, then using a placemat as a tracking marker in the real world to position the avatar behind a real table, allowed the virtual character to appear as if they were sitting in the real world. The illusion was particularly impressive when the character moved and their arms would disappear and reappear below and above the line of the back of the table. See the attached photo for a snapshot of the experience.

If I had any advice for other researchers looking to get into creating XR experiences, or even just learning to code, it is this: there’s no time like the present to start. There are plenty of great free resources online that go through everything you’ll need to know step by step, while also letting you navigate through lessons to learn the specific things you need for whatever project you might be working on. Though getting an understanding of the basics is fundamental, you can pick and choose which of the more specific topics to learn to suit your needs fairly easily. Also, just like any skill, you’ll need to keep practising. Find some little challenges to work towards, like the ones I had set out for me. There were a few times I didn’t focus on coding for a few weeks and then noticed that I had forgotten something I definitely knew before, and had to go back over previous lessons or code I had written to find it. Don’t fall into this trap like I did; keep it up at a steady pace and you’ll be writing code in no time.

STEM activity at Loughborough Grammar School

Post by Melanie Wilson (2018 Cohort)

We visited the Loughborough Schools STEM activity, hosted at Loughborough Grammar School and attended by pupils from several Loughborough schools. STEM stands for Science, Technology, Engineering and Maths, and the workshops encourage pupils to design and explore a project based on these criteria. We gave a presentation on the pathway we took when designing, prototyping and manufacturing the Endeavour LED sabre. We then addressed the need to consider what activities the final product would be used for, including any limitations or challenges which might need to be addressed. Finally, we talked with the pupils individually and invited them to tell us about their projects, exploring options that might be worth thinking about in their design stages and beyond.

More about Mel’s work and activities can be found here.

Lightsabre Mindfulness Fighting

Post by Melanie Wilson (2018 Cohort)

BBC East Midlands visited one of our LED sabre sessions in Sileby. Below is the report they produced. We were very pleased with this article, as it gave a good account of the classes and of our manufacture of the ISM Endeavour LED sabre. Despite a few Star Wars references added by the BBC, it was accurate and highlighted a number of our values!
Following the visit we were contacted by many interested people, including an international media company. As a result, the story appeared in a number of publications. These articles tended to relate more to the Star Wars films and their associated lightsabers. Both Star Wars and lightsabers are trademarks of Disney and Lucasfilm, and Intergalactic Sabre Masters Ltd has no affiliation with these trademarks or their owning organisations. We did not mention these trademarks during any of the interviews and were surprised to see some of the so-called “quotes”! However, there has been a lot of interest as a result of this exposure, which has pleased us, and there have been several new starters at classes, particularly from the BBC piece. In addition, there have been enquiries from many organisations wishing to book team-building events, lessons and displays. These are coming from people with a wide range of physical and mental abilities and their representative organisations. We are keen to ensure that the activity is open to a diverse range of participants.
—originally posted here

I was also on BBC Radio Leicester to talk about LED sabre manufacture and classes. The LED sabre classes with adults and children are designed to incorporate mindfulness, confidence building and self-awareness towards increasing resilience in a fun way. We teach traditional western sword arts, particularly those of the medieval hand-and-a-half sword. We train and spar with LED sabres, designed and manufactured locally, specifically made to be safe, ergonomic and easy to use by children and those with differing physical abilities.

5 tips for being more inclusive!

Post by Neeshé Khan (2018 Cohort)

I’m writing this blog post while travelling on a high-speed train that’s currently running from Glasgow (Scotland) back to where I need to be. The towns whizzing past are a blur of colour – to me these blotches of blurred colour represent so much life, emotions, tears, love, loss, endless stories and experiences. Some of these stories we might get to hear about; many more never get heard.

Much like those blotches of colour, my mind is also a blur with the many thoughts, ideas and experiences that I soaked up at the CHInclusion workshop. CHI is a prestigious conference which has started thinking seriously about inclusion at its events – opening up to new audiences and making those attending feel included in the computer science community.

Attending this workshop was particularly timely for me. I recently had the privilege of being interviewed for Women’s History Month about the diversity and inclusion of women in the cybersecurity and AI fields, and of sharing my own experiences in this regard. I don’t believe it was anything specific that I had done; rather, I ended up being in the right place at the right time.

I’ve also noticed how a lot of my conversations with my close support network have been focused on diversity, inclusion, social justice and equal opportunities. I thoroughly enjoy these conversations because (selfishly) they offer me great mental stimulation. I believe that these phenomena are interlinked and must be seen in a circular rather than a lateral way, and I think this is one of the greatest challenges we face for a better future.

These are some of my learnings from the interview, personal discussions and the CHInclusion workshop, which I hope will be inspirational to you and will also, in time, serve as a reminder to myself – enjoy!

1. Everything is a two-way mirror. I came across a sentence at the workshop which deeply resonated with me. It was something along the lines of ‘see yourself in others and see others in you’. I feel this is the core of being inclusive to others. It’s important to strive to find commonalities with anyone rather than differences. It’s a two-way mirror where we must constantly try to see others’ stories and challenges in our own experiences while also seeing ourselves in their actions and choices.

2. Inclusion and diversity are separate but, in many ways, the same. We must think about these carefully, as one is insufficient without the other. Inclusion is about including everyone along the way – just bring everyone along for the ride! To me, diversity essentially means taking a range of skillsets along for your ride. Even if someone offers the same skillset as you, they can still do things very differently to how you do them. One is not better or worse than the other, just different. I firmly believe diversity goes beyond gender and must start to seriously encompass ethnic minorities and truly represent diverse audiences, with each participant bringing their own skillset.

3. Empower yourself and those around you. Be more inclusive to everyone. Be more diverse in your engagements. Be mindful of your conscious and unconscious biases. We’re all people saying the same things in different ways. Some things we’ll inevitably love to hear and others we’ll dread, but listen to them anyway. This might empower you, as well as others, to create spaces that are tolerant and encouraging of new thoughts, people and things.

4. It’s not personal. I feel it’s important to remember that any comment that might exhilarate or aggravate you is not personal. No one other than you has travelled your journey, faced your challenges, overcome your adversities or experienced what you’ve experienced and how you’ve experienced it.

5. Create space for others even if you feel that space isn’t created for you. This can be especially hard to experience. Ultimately, it still allows you to offer a space that’s safe for anyone who needs it. You can foster micro-universes of interaction that will lead to others being empowered and eventually you’ll find yourself surrounded by people who offer this same space to you in a much deeper and richer way than you can imagine.

—originally posted at https://neeshekhan.wordpress.com/

Teaching Python Programming at Nottingham Girls High School

Post by Jimiama Mafeni Mase (2018 Cohort)

I participated in an outreach activity teaching the Python programming language to students at Nottingham Girls High School, organised by a social enterprise called Codex. Codex is a social enterprise run by students from the University of Nottingham. Selected candidates had interviews with the Codex management team, who were interested in the candidates’ Python coding skills and their passion for teaching children. Fortunately, Codex selected a small team of computer scientists, including myself, to teach an introduction to Python for 5 weeks (1 hour every week), from the 1st of March to the 29th of March 2019.

The syllabus for the course consisted of the fundamentals of Python programming, i.e. inputs, outputs, data types, maths operators, conditional statements, while loops and for loops. Each class was made up of about 15 to 20 students, and lessons took place in the school’s computer lab. We taught using PowerPoint lecture notes and hands-on programming exercises. This required us to speak clearly and be patient with the students, as most of them didn’t have any programming experience or knowledge. We were also required to make sure all the students understood the concepts and completed the exercises.
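To give a flavour of the level we pitched at, a short, self-contained exercise in the spirit of that syllabus might look like the following. The names and numbers are invented for illustration, and the values are hard-coded here rather than read with input() so the example runs on its own.

```python
# A small exercise touching the syllabus: data types, maths
# operators, conditional statements, for loops and while loops.

name = "Ada"          # a string
scores = [7, 9, 10]   # a list of integers

total = 0
for s in scores:      # for loop plus a maths operator
    total += s

average = total / len(scores)

if average >= 8:      # conditional statement
    grade = "well done"
else:
    grade = "keep practising"

countdown = 3
while countdown > 0:  # while loop counting down to zero
    countdown -= 1

print(name, average, grade)   # output
```

Each week of the course effectively added one of these constructs, with the exercises combining them in the same way this snippet does.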

We successfully completed the course on the 29th of March and provided a link for the students to fill out surveys about their experiences and rate the teachers. I learnt some teaching skills from this outreach activity, as it was my first experience as a teacher. In addition, it enhanced my problem-solving skills, as we received a lot of challenging questions about certain concepts in the lecture notes and exercises. It was a great experience and an opportunity to transfer some of my knowledge as a computer scientist to the younger generation, whom we consider “future leaders”. Lastly, I would love to thank Codex and the University of Nottingham for this opportunity, and I hope they create many more outreach activities for children to learn computer science.


Andrew returns from AAAL conference in Atlanta

Post by Andrew Moffat (2015 Cohort)

American Association of Applied Linguistics

For people learning a second language, today’s hyper-connectivity has the potential to present new domains of engagement with and exposure to their target language. Exposure to the target language is accepted as a necessary condition for language learning, and it is often a key variable in classifications of learning environments. However, Internet-based communication technologies have the potential to connect learners with expert and non-expert speakers of the target language, regardless of geographical location, providing opportunities for informal learning.

Computer-Mediated Communication (CMC) has historically been approached within Applied Linguistics and Second Language Acquisition as a tool for enhancing language learning. More recently, however, there has been an increased focus on investigating language learners’ pre-existing, “extramural” English-language online communicative activities, exploring their potential as a site of exposure to language and negotiation of meaning in authentic interaction, with a view to integrating this aspect of learners’ lives more deeply with their formal learning. Most of this work is small-scale and qualitative in nature, and there has been relatively little large-scale fact-finding carried out to survey current practices in this area.

My talk presented the findings of a large-scale survey undertaken in partnership between the University of Nottingham and Cambridge University Press. A questionnaire asking respondents about their English-language online communication activities was promoted on CUP’s online dictionary website, receiving over 10,000 responses in a four-week period from second language English speakers all over the world. The analysis of this data set identified contexts of CMC in which English learners most frequently use their L2 as well as commonly occurring difficulties encountered therein. The talk concluded with a brief overview of an approach to incorporating and supporting English-language online activities in the classroom, thereby integrating formal and informal learning.