Curious Conversations

My master's project enables children to explore their science interests through conversations with a virtual buddy named Trazo, leveraging the Amazon Alexa devices already in many households.


Background

It is critical to provide opportunities for young children to engage in meaningful conversations about science in order to foster long-term interest and curiosity in science-related topics. However, because elementary students cover so many topics in the classroom, they spend limited time learning about their favorite science topics. And although many platforms aim to provide informal learning experiences for children at home, they fail to promote two-way conversation.

Timeline

Prior Studies Show

Elementary students spend about 20 minutes daily on science instruction, compared with 60 minutes for math and 90 minutes for reading and language arts (Sparks, 2021).

At home, parents often do not know how to answer their child's questions about complex scientific ideas in a way their child can understand (Silander et al., 2018).

Discovery

Research Goal

To better understand users and inform the design of a new product that would help them learn about their favorite science topics from the comfort of their homes.

Questions

What platforms do users currently use to learn?

What science topics do users like learning about?

What concerns do parents have while their children are learning?

To find the answers to these questions, we decided to conduct a survey.

US | 154 Parents | 3-Week Project

My Role


Process & Challenges

Survey Design

Basic Survey Structure

  • The child's learning habits
  • Their device usage at home
  • The platforms they use, with follow-up questions based on the category they choose: shows, games, or audio-based platforms
  • Parents' needs and wants while their child is learning

Piloted Drafts

Piloted with a small sample of Stanford-affiliated members

Launched the Final Survey on Prolific

Exported the data collected in Qualtrics to a Google Sheets spreadsheet for further analysis

Filtering Noise from Data

Some of the collected data would have had a negative effect on the results. The areas of concern I saw in the survey were:

  • Completing the survey in an unusually short amount of time.
  • Giving similar answers to every question (for instance, one participant selected A throughout parts of the survey).
  • Dropping the survey toward the middle or end.
  • Entering gibberish in some of the fields just to complete the entry.

Example of data that I filtered out from the sheet

Given the sample size, keeping these categories of data would have caused inconsistencies in the results, so letting go of 22 responses was the right call. After removing these anomalies, I relaunched the survey to complete my sample size.
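Conceptually, the cleaning applied rules like the sketch below. This is a minimal illustration in Python with pandas; the column names (Duration (in seconds), Finished, open_response) and thresholds are assumptions, since the actual filtering was done by hand in the sheet.

```python
import pandas as pd

MIN_DURATION_SECONDS = 120  # assumed threshold for a plausible completion time

df = pd.read_csv("qualtrics_export.csv")  # hypothetical Qualtrics CSV export
likert_cols = [c for c in df.columns if c.startswith("Q")]  # assumed question columns

too_fast = df["Duration (in seconds)"] < MIN_DURATION_SECONDS
straight_lined = df[likert_cols].nunique(axis=1) == 1  # same answer to every question
incomplete = df["Finished"] != True                    # dropped mid-survey
gibberish = df["open_response"].str.fullmatch(r"[^aeiou\s]{4,}", na=False)  # crude keyboard-mash check

clean = df[~(too_fast | straight_lined | incomplete | gibberish)]
print(f"Removed {len(df) - len(clean)} of {len(df)} responses")
```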

Collecting all responses and coding them by hand! 

Before using descriptive statistics, I had to:

  • Form categories for the free-text responses that users had added to the form.
  • Convert the confidence scores into a point scale and weight each answer according to the respondent's confidence.

Since this process was time-consuming, I decided to improve it for future use.

Following my data collection, I embedded a formula within the Google Sheet so the process could be automated the next time we sent out the survey. This made the process more scalable; the technique was reused in later survey rounds and saved a lot of time.
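To give a sense of the weighting step, here is a sketch in Python; the five-point confidence mapping and column names are illustrative, since the real version lived as a formula in the Google Sheet.

```python
import pandas as pd

# Illustrative mapping from confidence labels to a point scale (assumed 5-point).
CONFIDENCE_POINTS = {
    "Not at all confident": 1,
    "Slightly confident": 2,
    "Somewhat confident": 3,
    "Fairly confident": 4,
    "Very confident": 5,
}

df = pd.DataFrame({
    "answer_score": [3, 5, 2],  # hypothetical hand-coded answers
    "confidence": ["Very confident", "Somewhat confident", "Fairly confident"],
})

# Weight each answer by the respondent's confidence, normalized to 0-1.
# Rough Google Sheets equivalent: =B2 * VLOOKUP(C2, PointsTable, 2, FALSE) / 5
df["confidence_points"] = df["confidence"].map(CONFIDENCE_POINTS)
df["weighted_score"] = df["answer_score"] * df["confidence_points"] / 5
print(df)
```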

Example of a chart that resulted from our study

Descriptive Statistics

After re-scaling all responses, I needed to calculate: 

  • The average score: the mean.
  • The standard deviation: to understand how much spread each variable had.
  • The 95% confidence interval: to estimate the range within which the population mean likely falls.
  • The standard error of measurement: to determine how precise the measurement is; the smaller the SEM, the more precise the measurement capacity of the instrument.

This was done for: 

Usage of devices (iPad, Kindle, tablet, laptop, etc.)

Platforms they use (audio-based platforms, games, and shows)

Concerns of parents
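For illustration, here are the same calculations in Python with made-up data, using the normal approximation for the 95% interval and computing the SEM as the standard error of the mean:

```python
import numpy as np

scores = np.array([4, 3, 5, 2, 4, 4, 3, 5])  # hypothetical re-scaled responses

n = len(scores)
mean = scores.mean()
sd = scores.std(ddof=1)     # sample standard deviation
sem = sd / np.sqrt(n)       # standard error of the mean
ci_low = mean - 1.96 * sem  # normal-approximation 95% confidence interval
ci_high = mean + 1.96 * sem

print(f"mean={mean:.2f}, sd={sd:.2f}, SEM={sem:.3f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```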


Impact & Results

The graphs were then presented to the other team members along with design recommendations for the product, which were implemented next...

As we visualized the data, we uncovered some key insights into the problem that we wanted to address: 

86% 

Parents are concerned about the time their child spends on screens every day.

57%

Children are excited about astronomy and like learning through adventure stories.

43%

Parents see their child using smart speakers, while some report tablet use.

32% 

Children use Alexa, particularly the Kids version, while some use Google Home.

Next, the prototype was designed based on these insights.

Based on the insights collected, the team designed the product experience, and that prototype was tested in our next phase!

Prototype

Developed on the Alexa device to help children learn more about space and astronomy, the prototype starts with an icebreaker question that gets children imagining their future in space to develop their interest. This is followed by a story called "There's a Hole in My Galaxy," which revolves around a girl who undertakes a space adventure with her friends. Toward the end of the story, there is a drawing activity and a question asking the child to rate their learning experience, which receives a personalized response.
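For flavor, here is a minimal sketch of how the opening icebreaker might be wired up with the Alexa Skills Kit SDK for Python (ask-sdk-core). The handler and dialogue are illustrative assumptions, not the project's actual code:

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type


class LaunchRequestHandler(AbstractRequestHandler):
    """Greets the child as Trazo and opens with the icebreaker question."""

    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = (
            "Hi, I'm Trazo! Before our space adventure begins, tell me: "
            "what would you do if you traveled to space one day?"
        )
        return (
            handler_input.response_builder
            .speak(speech)
            .ask("What would you do if you traveled to space?")  # reprompt
            .response
        )


sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
handler = sb.lambda_handler()  # entry point when the skill is hosted on AWS Lambda
```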

Testing of the Prototype

Research Goal

Understand the first-time user experience (FTUE) of users interacting with Curious Conversations, and identify the factors that could be improved as they learn about science topics.

Questions

How easy or difficult is it for users to interact with the prototype?

What are the goals and expectations of the users?

What elements were missing in the first design iteration? 

What do the users think about the story content & questions?

My Role


Process & Challenges

Recruiting Participants

Due to limited resources and the ongoing COVID-19 pandemic, we had to reduce the number of participants for the study. As part of the recruitment process, I wrote an email that described the incentive for participating, the time required, and a link to the screener questions, and posted it to online forums such as NextDoor, the Stanford Parents community, and Facebook groups. From there, I compiled a shortlist of participants based on their age, gender, background, and technology usage. In the end, we got a good mix of participant genders and backgrounds.

Participant Demographics

Children between the ages of 8 and 12, our target audience

Had mixed experience with technological devices

Prior experience using conversational agents at home

Procedure and Methodology

Moderated Usability Testing Study

The study was conducted in a backyard at Stanford with 15 children. They were first introduced to the Alexa device, and the conversation was recorded after a consent form was signed by each parent. Each session was an hour long; children were given tasks and asked to "think aloud" while completing them. At the end of the tasks, they were also asked a set of questions to capture their overall impressions of the product.

 

Why Usability Testing?

Wanted to explore our prototype with a younger age group to get feedback on the content and duration. 

Observe their reactions & engagement levels when they first used the prototype. 

Witness pain points & suggestions without any adults intervening.


Impact & Results

Analyzing Data

Participants did not know when to start speaking when immersed in the storyline

  • 9 out of 15 participants started answering the question before being prompted by the agent.
  • 3 out of 15 waited for Alexa to speak even after the question ended.

Participants felt that the length & flow of the story were very engaging

  • 9 out of 15 participants said they enjoyed the story when it ended.
  • 8 out of 15 wanted more activities to perform.

Participants wanted more activities and questions towards the end of the story

  • 3 out of 15 participants wanted to have more questions.
  • 5 out of 15 participants wanted other activities at the end.

Design Recommendations provided to the team

Based on these findings, I provided the following design recommendations, which informed the next design iteration:

Conversational Agent plays a tune after asking the child to turn the page

The prompt is embedded within the story rather than voiced by a separate agent.

The question is removed from the story narration; only the CA asks it.

Option provided to skip ahead and work on additional activities.
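As an example, the first recommendation could be expressed with an SSML audio tag in the skill's response. The sketch below assumes a hypothetical TurnPageIntent and a placeholder clip URL:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

# Placeholder clip URL; Alexa requires hosted MP3s that meet its audio format rules.
PAGE_TURN_CHIME = '<audio src="https://example.com/audio/page-turn-chime.mp3"/>'


class TurnPageIntentHandler(AbstractRequestHandler):
    """Asks the child to turn the page, then plays a short chime as the cue."""

    def can_handle(self, handler_input):
        return is_intent_name("TurnPageIntent")(handler_input)

    def handle(self, handler_input):
        speech = "Great reading! Turn the page when you're ready. " + PAGE_TURN_CHIME
        return handler_input.response_builder.speak(speech).response
```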

Measuring our success

Based on the feedback, the team made another, more user-friendly design iteration, and we raised $1,500 in funding from StartX. We also presented our project at the Expo to esteemed faculty and industry professionals in this area. You can check out our project on the Stanford website here and watch our project video below:


Learnings & Next Steps

  • Conducting more in-person testing to ensure that our product meets the needs of our users.
  • Testing with a diverse population to see if our product matches their goals and expectations.
  • Adding more diverse stories on different topics to appeal to children's interests based on their backgrounds.
  • Currently, our project runs in beta mode; the hope is to turn it into a fully functional product and deploy it on Alexa.
