September 2016 - December 2016
OVERVIEW
Project EMAR is part of a directed research group within the Department of Human Centered Design & Engineering at the University of Washington. EMAR (Ecological Momentary Assessment Robot) is a social robot that measures teen stress.
This page outlines only my contributions to the research effort; information about the overall research can be found on the official blog (link at the bottom).
PROBLEM STATEMENT
Adolescents are subject to high levels of stress in their lives, resulting from school, relationships, and family life. Teens experience more chronic stress than any other age group, and are extremely vulnerable to the negative effects of stress.
VISION STATEMENT
To develop a social robot that can measure teen stress. EMAR will live at local high schools and serve as a peer to students, interacting with them and collecting data about their stress levels. By understanding how students are truly feeling, school communities can better address stress.
This approach is a promising intervention because:
- Robots offer an authentic, engaging way for teens to share their emotions
- Robots inherently collect data that can help adults approach the topic of teen stress
- Initial research shows that students are more likely to comply with a physical robot than with a computer or screen
- It's innovative: teens are a largely unexplored population in robot design
TEAM
This research is facilitated by Elin Björling, Andrew Davidson, and Emma Rose. There were 14 student researchers, including myself. To make use of the group's diverse talents, we split into 3 subgroups: physical computing, social interactions, and feeling heard.
I was a part of the 'feeling heard' group which consisted of 4 members: Rutha Nuguse, TJ Koines, Rachel Ren, and myself. My team addressed human-robot interaction and psychology. In particular, we focused on the question, "how can we design EMAR in such a way that helps users feel heard?"
RESEARCH QUESTIONS
INVESTIGATION
The first step toward developing EMAR's script and eye movement involved reviewing existing literature using search terms such as "human-robot interaction," "robot responses to humans," and "robot design." By understanding the features of existing robots, my team could better decide how to design EMAR's eyes to evoke the feeling of "being heard." The literature was also used to support why EMAR's eyes should look and react certain ways.
FINDINGS
- Speech recognition is key to the sense of feeling engaged and understanding one another
- A robot with a high-pitched voice is perceived as more attractive and as having a stronger personality
- Use of humor makes robots more likable and trustworthy
- People do not like being prescribed solutions to their problems (ex: being told what to do)
- Emotional expressions using blushing and complexion changes deliver a robot's positive and negative affect as effectively as speech
- Robots commonly express emotion through facial expressions as well as body movement, posture, orientation, color, and sound
- Eye contact provides acknowledgement
- Robotic eye acknowledgements exist via blinking, color change, widening/shrinking pupil
IDEATION
I contributed to team meetings by sharing ideas for how EMAR's eyes could respond to user input. Both of my ideas progressed into the ideation stage, at which point my group members and I sketched the designs (shown below).
- Happy - Appears during greetings and positive messages/input (Ex: EMAR says: "Hi, nice to meet you! I'm EMAR.")
- Wide - Appears when EMAR is waiting for the user to input a response (Ex: EMAR says: "May I ask you a question?")
- Small - Appears when neutral messages and responses are delivered (Ex: EMAR says: "Ok. Find me later if you feel like talking.")
- Sad - Appears when user input is very negative (Ex: EMAR asks: "How do you feel right now?" User responds: "Very bad")
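The four eye states above amount to a simple mapping from dialogue events to expressions. As a minimal sketch of that mapping (the state names come from our sketches; the trigger logic and sentiment threshold values are illustrative assumptions, not the team's actual code):

```python
from enum import Enum

class EyeState(Enum):
    HAPPY = "happy"   # greetings and positive messages
    WIDE = "wide"     # waiting for user input
    SMALL = "small"   # neutral messages and responses
    SAD = "sad"       # strongly negative user input

def eye_state_for(event: str, sentiment: float = 0.0) -> EyeState:
    """Pick an eye state from a dialogue event and a sentiment score
    in [-1, 1], where negative values mean distressed user input.
    (Thresholds here are assumed for illustration.)"""
    if event == "greeting" or (event == "message" and sentiment > 0.3):
        return EyeState.HAPPY
    if event == "awaiting_input":
        return EyeState.WIDE
    if event == "user_input" and sentiment < -0.5:
        return EyeState.SAD
    return EyeState.SMALL

print(eye_state_for("greeting"))          # EyeState.HAPPY
print(eye_state_for("user_input", -0.9))  # EyeState.SAD
```

Keeping the mapping in one place like this would let the script change without touching the eye logic, which matters given that we held the script constant across eye variations.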
To accompany the eye variations, my team also designed the script. From our literature review, we gathered that eye responses elicit more empathy than speech, so we decided to hold the script constant while varying the eyes. In the near future there will be two robots with differing eyes; for the scope of this project, however, we were only able to produce one working robot, so the team moved forward with the varying pupils.
IMPLEMENTATION
Once my team settled on a design idea, we handed the prototype off to the developers (the physical computing team), who hard-coded the eye movements into EMAR.
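The physical computing team's code isn't shown here, but hard-coding an eye movement typically means interpolating between fixed keyframes. A minimal sketch, assuming hypothetical pupil-scale values for each state:

```python
# Assumed pupil scales per eye state (illustrative values, not the team's).
PUPIL_SCALE = {"happy": 1.0, "wide": 1.4, "small": 0.6, "sad": 0.8}

def pupil_frames(start: str, end: str, steps: int = 10) -> list[float]:
    """Linearly interpolate pupil scale from one state to another,
    producing a fixed sequence of frames to send to the display."""
    a, b = PUPIL_SCALE[start], PUPIL_SCALE[end]
    return [round(a + (b - a) * i / (steps - 1), 3) for i in range(steps)]

frames = pupil_frames("small", "wide")  # widen the pupils over 10 frames
```

Because the transitions are precomputed rather than physics-driven, the behavior is fully predictable, which is what "hard-coded" buys you at the cost of flexibility.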
USER TESTING
After each implementation, we conducted focus groups and guerrilla usability testing to evaluate users' ability to feel heard when interacting with EMAR.
We discovered code defects that prevented users from fully experiencing the interaction with EMAR. Though most of the defects related to EMAR's audio and display synchronization, some key findings related to EMAR's eyes include:
REFLECTION
Successes
- Almost all participants found EMAR to be engaging
- Participants found it intuitive to use EMAR's digital chest for interaction
Limitations
- Difficult to determine how successful EMAR is at measuring teen stress
- Computer bugs hindered seamless usability test sessions
- Only had one opportunity to engage the target population (high school teens) in the design cycle
- The project was divided into 3 subgroups (Feeling Heard, Social Interactions, and Physical Computing), which limited our opportunities to communicate and touch base with each other
- Developers had to make lots of changes quickly given the short project timeline
Recommendations
- Conduct regression tests and heuristic evaluations more frequently, especially after each code implementation, to avoid presenting defects to participants during usability test sessions
- Allot more time for subgroups to communicate updates, questions, and concerns