December 5, 2017
Project EMAR: Usability Study Findings and a Sneak Peek at V4!
Analyzing the data for iteration
This week the whole group met to discuss and analyze the findings from the usability study, along with all of the feedback gathered from teens. We established the main pain points of the website so the Web Team could focus its iterations where they matter most. Our findings showed that users had difficulty with navigation, so the team began searching for a new site template that allows for more user-friendly navigation. Some of the other areas being addressed include:
- Fine-tuning the content so the purpose of the site comes across clearly and concisely
- Improving the visibility of contact information
- Deciding on the best information architecture for presenting the videos and the steps of the human-centered design process
- Making sure the visual design is consistent across pages and sections
Student achievement
Soobin Kim, a member of our group on Team Video, presented his work on the Design Challenge videos at a symposium at the University of Washington Tacoma.
Learning more about V4
Alisa Kalegina is a Graduate Research Assistant for Project EMAR in the Human-Centered Robotics Lab. She is supervised by co-PI Maya Cakmak and is working on the fourth iteration of the EMAR robot. EMAR V4 has some exciting differences from previous versions. Like V3, V4 has a tablet for the screen on its body, but V4 will also have a tablet for its face. The tablet provides greater flexibility for facial expression and a more customizable design. V4 will also use a web application instead of native software, which will give it a much broader ability to respond to users.
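To make the idea of a web-rendered, customizable face more concrete, here is a minimal sketch of what such an application might look like. This is purely illustrative: the interface and property names (FaceConfig, eyeColor, eyeRadius, hasMouth, and so on) are hypothetical stand-ins, not Project EMAR's actual implementation.

```typescript
// Hypothetical sketch of a parameterized, web-rendered robot face.
// All names here are illustrative, not Project EMAR's actual code.

interface FaceConfig {
  backgroundColor: string; // tablet/face background
  eyeColor: string;
  eyeRadius: number;       // pixels
  eyeSpacing: number;      // distance between eye centers, pixels
  hasMouth: boolean;
}

function drawFace(canvas: HTMLCanvasElement, config: FaceConfig): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;

  // Background
  ctx.fillStyle = config.backgroundColor;
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  const cx = canvas.width / 2;
  const cy = canvas.height / 2;

  // Eyes: two filled circles, symmetric about the vertical center line
  ctx.fillStyle = config.eyeColor;
  for (const dx of [-config.eyeSpacing / 2, config.eyeSpacing / 2]) {
    ctx.beginPath();
    ctx.arc(cx + dx, cy - 20, config.eyeRadius, 0, 2 * Math.PI);
    ctx.fill();
  }

  // Optional mouth: a simple downward-curving arc below the eyes
  if (config.hasMouth) {
    ctx.strokeStyle = config.eyeColor;
    ctx.lineWidth = 4;
    ctx.beginPath();
    ctx.arc(cx, cy + 20, 40, 0.2 * Math.PI, 0.8 * Math.PI);
    ctx.stroke();
  }
}

// Example usage in a page with a <canvas> element:
const canvas = document.querySelector("canvas")!;
drawFace(canvas, {
  backgroundColor: "#222",
  eyeColor: "#6cf",
  eyeRadius: 25,
  eyeSpacing: 120,
  hasMouth: true,
});
```

Because the face is just data under this approach, a teen's customizations can be redrawn instantly in the browser, which is what makes a rendered face so much easier to adapt than hardware like LED eyes.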
Alisa gave the following insight into using a tablet for the face:
“One of the key aspects of using a rendered face is precisely its ability to be customized. EMAR V3’s analog face is, in a way, a top-down solution, since the LED eyes and animations cannot be customized. So although the tablet may take away from the analog feeling of the prototype, it is at the same time very freeing, in that the teenagers themselves can affect how the face looks, and provide input for designing the faces of subsequent prototypes. In order for this to be a robot that is fully created through participatory design, the face ought not be exempt.”
Maya elaborates on this research:
“In this work we identified 157 rendered robot faces and annotated them in terms of 76 face properties, like what color the face is, the size and shape of the eyes, or whether the face has a nose, a mouth, eyebrows, pupils, among many others. This gives us a large list of properties we can vary on our customizable V4 face. We also did some questionnaires that revealed how people’s perception of the face changes with these face properties, in terms of how trustworthy, friendly, or intelligent it was among other things. This gives us a place to start for designing robot faces that are likely to have desirable perceived properties, but we need to do more work focusing on teenagers and understanding their preferences.”
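As a loose illustration of how annotations like these might be represented and related to perception ratings, here is a small sketch. The property names and the Likert-style rating scale are hypothetical; the study's actual 76 properties and questionnaire items are not listed in this post.

```typescript
// Hypothetical sketch of face annotations and perception ratings.
// Names and values are illustrative, not the study's actual schema.

interface FaceAnnotation {
  faceId: number;
  hasMouth: boolean;
  hasEyebrows: boolean;
  faceColor: string;
}

interface PerceptionRating {
  faceId: number;
  trustworthy: number; // e.g., 1-7 Likert scale
  friendly: number;
}

// Compare average perceived friendliness for faces with and without
// a mouth, to see how varying one property shifts perception.
function meanFriendlinessByMouth(
  annotations: FaceAnnotation[],
  ratings: PerceptionRating[]
): { withMouth: number; withoutMouth: number } {
  const byId = new Map<number, FaceAnnotation>(
    annotations.map((a) => [a.faceId, a])
  );
  const groups = { withMouth: [] as number[], withoutMouth: [] as number[] };
  for (const r of ratings) {
    const a = byId.get(r.faceId);
    if (!a) continue;
    (a.hasMouth ? groups.withMouth : groups.withoutMouth).push(r.friendly);
  }
  const mean = (xs: number[]) =>
    xs.length ? xs.reduce((s, x) => s + x, 0) / xs.length : NaN;
  return {
    withMouth: mean(groups.withMouth),
    withoutMouth: mean(groups.withoutMouth),
  };
}
```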