June 16, 2017
Spring Quarter for Project EMAR!
This is the end of Project EMAR year #1! The quarter brought many changes and developments for Project EMAR. While the tech team was busy creating an entirely new Version 3 of EMAR, the research team spent the past 10 weeks gathering new data from local high schools, analyzing the existing data, and drafting a research paper about our recent findings. We also finished and submitted a paper on our early EMAR research, which we will present this August at the 2017 ACM SIGDOC conference in Halifax.
Research Team
An important part of Project EMAR is sharing what we are learning about designing social robots for teens. As the research team looked back on all the data we had collected over the past year, we identified three main areas of interest that kept surfacing in our research:
(1) EMAR’s design
(2) How teenagers engaged with EMAR
(3) Trust between teens and EMAR
When we started digging into the data, the team found we actually had so much detail that it made sense to give each area its own paper. So now we are moving forward to develop a paper on each topic.
Field Research
We visited two more high schools this quarter and captured even more data on teen interactions with EMAR. The themes we had already identified, the cuteness factor and engagement, remained prominent in these new field tests.
Engagement
The team decided to focus on the engagement aspect of EMAR. One moment that really stood out to the research team was when a male student hugged EMAR while the boys in the background looked on affectionately. That a teenage boy would hug EMAR was a surprising finding for the entire team. Another idea we emphasized in the paper was social referencing: whenever EMAR responded, the participant interacting with EMAR would look over at his or her friends, either pointing out what EMAR was doing or simply sharing the moment.
Re-Enactment
We have gathered amazing video footage of teen interactions with EMAR. But because we are not allowed to share any of the raw data from our field research, the team came up with a creative way to share it in our current paper: we re-enacted the interactions between the teens and the robots (with full disclosure, of course), using undergraduate students as stand-ins. Thanks to Dorothy Wong and her photography skills, we were able to re-enact sharable moments very similar to those we captured in the real world.
The Technology Team
The tech team's goal for the quarter was to create EMAR V3. The foundation had been laid in the previous quarter, and this quarter is where much of the hard work took place. The goal of EMAR V3 was to create a more stable, yet flexible, system that could be altered and maintained by anyone, including someone with little technical experience.
After much deliberation, we decided to upgrade EMAR's platform from Arduino to Raspberry Pi. We also created an online content management system (CMS) that allows admins to control EMAR's interactions through a simple user interface. This means that the researchers collecting data with EMAR can make changes without having to get into the code. For example, we can now control eye movement, sound files, displayed text, and responses all through an intuitive and flexible CMS. We also made a number of physical upgrades to EMAR, such as onboard audio, onboard video and sound data collection, a higher-fidelity screen, and a more responsive system.
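To give a sense of how this might look in code, here is a minimal sketch of EMAR pulling one interaction step from the CMS. The endpoint URL and field names are hypothetical stand-ins; the real CMS schema isn't covered in this post.

    import requests

    CMS_URL = "http://emar-cms.example.com/api"  # hypothetical address for the CMS server

    def fetch_step(step_id):
        """Ask the CMS for one interaction step: eye animation, sound, text."""
        resp = requests.get("{}/steps/{}".format(CMS_URL, step_id), timeout=5)
        resp.raise_for_status()
        # e.g. {"eyes": "blink", "sound": "hello.wav", "text": "Hi, I'm EMAR!"}
        return resp.json()

    step = fetch_step(1)
    print(step["text"])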
These requirements called for a number of technical accomplishments this quarter. The Raspberry Pi had to control EMAR's eyes without the baked-in Arduino code used in V2. A new body had to be built, scaled to fit the new system. A server had to be created to host EMAR's CMS. A cloud-based system had to be implemented and connected so that EMAR's incoming data could be stored. Onboard audio and a webcam had to be built into EMAR's body and also controlled via the server. In addition, we developed a new version of EMAR's UI.
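For illustration, here is one way a Pi could render eyes with no Arduino in the loop. This is purely a sketch under the assumption that the eyes are drawn on the onboard screen; it is not the project's actual rendering code.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((480, 320))  # assumed screen resolution
    clock = pygame.time.Clock()

    def draw_eyes(surface, pupil_offset):
        """Draw two cartoon eyes, shifting the pupils by pupil_offset pixels."""
        surface.fill((0, 0, 0))
        for cx in (140, 340):  # x-centers of the two eyes
            pygame.draw.circle(surface, (255, 255, 255), (cx, 160), 60)  # white of the eye
            pygame.draw.circle(surface, (0, 0, 0), (cx + pupil_offset, 160), 25)  # pupil

    offset, step = 0, 2
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                raise SystemExit
        offset += step
        if abs(offset) > 25:
            step = -step  # reverse direction: glance side to side
        draw_eyes(screen, offset)
        pygame.display.flip()
        clock.tick(30)  # 30 frames per second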
In short, the current EMAR V3 works like this (a rough code sketch of the loop follows the list):
EMAR fetches the survey data, previously filled out by a survey administrator, from the CMS.
Upon the first user interaction, EMAR turns on the webcam and begins asking questions based on the survey data, controlling eyes, text, and sound as necessary.
When the user completes the interaction, EMAR turns off the webcam and uploads the video to the cloud.
After there have been a certain number of interactions (currently around 50), EMAR uploads a spreadsheet containing the survey data to the cloud for easy access.
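Here is a rough sketch of that loop, with stand-in stubs for the camera and question code. The upload endpoint, file names, and helper functions are assumptions for illustration; only the overall flow (fetch survey, record, upload, batch the spreadsheet) comes from the description above.

    import csv
    import requests

    UPLOAD_URL = "http://emar-cloud.example.com"  # hypothetical cloud endpoint
    BATCH_SIZE = 50  # "around 50" interactions per spreadsheet upload

    def fetch_survey():
        """Pull the survey the administrator entered in the CMS (stubbed here)."""
        return {"questions": ["How was your week?", "What stressed you out?"]}

    def ask(question):
        """Drive eyes, on-screen text, and sound for one question (stubbed here)."""
        print("EMAR asks:", question)
        return input("> ")

    def run_interaction(survey):
        # The webcam would switch on here, at the first user interaction.
        answers = [ask(q) for q in survey["questions"]]
        # The webcam switches off and the video uploads once the user finishes,
        # e.g. requests.post(UPLOAD_URL + "/videos", files={"video": video_file})
        return answers

    def main():
        survey = fetch_survey()
        results = []
        while True:
            results.append(run_interaction(survey))
            if len(results) >= BATCH_SIZE:  # batch the survey data as a spreadsheet
                with open("survey_data.csv", "w", newline="") as f:
                    csv.writer(f).writerows(results)
                with open("survey_data.csv", "rb") as f:
                    requests.post(UPLOAD_URL + "/sheets", files={"sheet": f})
                results.clear()

    if __name__ == "__main__":
        main()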
This new iteration of EMAR allows for easy customization and iteration of the internal code, as well as greater stability – no more loose Arduino wires in the field! We expect the system to be more stable and reliable for field testing, and more flexible, since anyone (including non-technical researchers) can edit and control EMAR's functions.
Stay Tuned! Project EMAR Year 2 is coming…