
Thursday, September 6, 2012

Assignment 1 - Paper Reading #4

Intro:
  • Paper reading #4: Children and Robots Learning to Play Hide and Seek
Reference information:
  • J. Gregory Trafton, Alan C. Schultz, Dennis Perzanowski, Magdalena D. Bugajska, William Adams, Nicholas L. Cassimatis, and Derek P. Brock. 2006. Children and robots learning to play hide and seek. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI '06). ACM, New York, NY, USA, 242-249. DOI=10.1145/1121241.1121283 http://doi.acm.org.lib-ezproxy.tamu.edu:2048/10.1145/1121241.1121283
Author Bios: 
  • J. Gregory Trafton
    • Princeton University, 1989–1994. Ph.D in Psychology, June 1994
    • Princeton University, 1989–1991. M.A. in Psychology, 1991
    • Trinity University, 1985–1989. B.S. in Computer Science; second
      major in Psychology, 1989
    • 2006–present: Section head for the Intelligent Systems Section
      for the Naval Research Laboratory
    • 1995–2006: Engineering Research Psychologist for the Naval Research
      Laboratory
    • Research Interests: Computational Cognitive Modeling, Human-Robot Interaction,
      Interruptions and Resumptions, Graph Comprehension (including scientific visualization) 
  • Alan C. Schultz
    • Publication years: 1986-2009
    • Publication count: 32
    • Citation Count: 232

  • Dennis Perzanowski
    • Currently: Adjunct Lecturer at George Mason University
    • Currently: Supervisory Computational Research Linguist at Naval Research Laboratory
    • Assistant Professor at Utica College of Syracuse University
    • Assistant Professor and Assistant Academic Dean at The College at Lincoln Center, Fordham University
    • Education: New York University, La Salle University, Temple University
  • Magdalena D. Bugajska
    • Affiliated with: 
      • University of South Florida 
      • Naval Research Laboratory 
      • Rensselaer Polytechnic Institute
    • Publication years: 1999-2010
    • Publication count: 12
    • Citation Count: 77
  • William Adams
    • Affiliated with:
      • Naval Research Laboratory
    • Publication years: 1992-2009
    • Publication count: 18
    • Citation Count: 93
  • Nicholas L. Cassimatis
    • Received a bachelor's degree in mathematics from MIT and a master's degree in psychology from Stanford.  
    • Studied artificial intelligence at the MIT Media Laboratory
    • NRC postdoctoral associate at the Naval Research Laboratory's AI Center for two years
    • Faculty member of the Cognitive Science Department at Rensselaer Polytechnic Institute

  • Derek P. Brock 

    • US Naval Research Laboratory 
    • Publications: 44
    • Citations: 401
    • Interests: Artificial Intelligence, Human-Computer Interaction, Engineering
Summary:

This paper describes how children and robots interact and can learn from one another by playing hide and seek. An AI robot and one of the authors' daughters were used in an experiment to determine whether humans and robots can learn from one another, and whether robots can learn complex skills such as viewing a situation from another person's perspective. The study was conducted twice: once when the child, named "E", was 3 1/2 years old, and again when "E" was 5 1/2 years old. At each age, "E" was asked to hide somewhere in her own house; the robot was to find her and, based on her hiding spot, "learn" about hiding and give feedback about the spot. After these trials, the robot was asked to hide and "E" was to seek. There was also a perspective-taking experiment in which "E" was asked to determine which hands were a person's left and right, both when the person was facing the same direction as her and when the person was facing toward her. The study showed that once children are given feedback, they learn very quickly, and that their perspective-taking ability increases dramatically from ages 3 to 5.

[Image: the robot turning 180 degrees to "close" its eyes.]

Related work not referenced in the paper:


  1. Spatial relation model for object recognition in human-robot interaction - This paper describes how a spatial relation model can be used for robots to interpret a user's spatial relations description. The robot would be able to detect the user's target object by asking the user about the spatial relationship of the object with other objects in the room. This can be used to help elderly and handicapped persons and is related to the current paper because it requires a human to interact with a robot.

  2. Spatial language experiments for a robot fetch task - This paper outlines a new study that investigates spatial language for use in communication between humans and robots. Similar to the paper above, the scenario studied is a home setting in which the elderly person has misplaced an object, and the robot will help the resident find the object. This also requires human and robot interaction.

  3. Multimodal approach to affective human-robot interaction design with children - Two studies are performed between humanoid robots and the influence on children's behavior. The first study looked at the styles of interaction and the general features of robots. The second study looked at how the robot's attention influences a child's behavior and engagement. Requires humans to interact with robots, too.

  4. The icat as a natural interaction partner: playing go fish with a robot - This paper studies how iCat, a robot platform, can be used to perform experiments with children to test the perceived effects of socio-cognitive abilities. Two different versions of iCat were developed: a socio-cognitive iCat robot that behaves socially and takes the mood of the child into account, and an ego-reactive iCat robot that does not. These two robots were evaluated and compared with each other in a scenario where the robot plays Go Fish with a child. Similar to the current paper because of the cognitive-science angle and a child's interaction with a robot.

  5. Robots asking for directions: the willingness of passers-by to support robots - This paper reports on a human-robot interaction study conducted with an autonomous mobile robot in a public place, where the robot needs help from human passers-by to find its way to a target location. This requires robot-human interaction and the interpretation of human feedback to find the target location.

  6. Adaptive eye gaze patterns in interactions with human and artificial agents - This paper investigates gaze patterns in both human-human and human-agent interactions to study how humans and robots collaborate and interact with one another.

  7. Conversational gaze mechanisms for human-like robots - The current work studies people's use of conversational gaze mechanisms and how they might be designed/implemented in human-like robots, and whether these signals effectively shape human-robot conversations. Similar to #6.

  8. Eliciting caregiving behavior in dyadic human-robot attachment-like interactions - This paper studies the behavior of a Sony AIBO robot, driven by an arousal-based model, during the exploration of a children's play mat. It also studies the interaction between humans and robots.

  9. Development of a cognitive model of humans in a multi-agent framework for human-robot interaction - In this paper, a user-centric multi-agent framework for robot control and human-robot interaction and a cognitive agent are used to study effective communication between humans and robots.

  10. Computational visual attention systems and their cognitive foundations: A survey - This article surveys a broad range of applications of computational attention systems in fields like computer vision, cognitive systems, and mobile robotics, and how they can be used to communicate and interact with humans effectively.

The current paper's contribution is not entirely new: plenty of studies have been performed with robots and humans interacting. However, no prior studies have used a child and a robot playing hide and seek.

Evaluation:

The study was evaluated quantitatively using an objective measure: the percentage of times "E" correctly identified a person's left and right hands. At age 3 1/2, "E" identified a person's left and right hands correctly 100% of the time when the person was facing the same direction as her, but less than 50% of the time when the person was facing toward her, which shows that she had an egocentric bias and that her perspective-taking abilities were not fully developed yet. At age 5 1/2, "E" identified a person's left and right hands correctly 100% of the time both when the person faced toward her and when the person faced away, showing that her perspective-taking ability grew dramatically in two years. As for the other part of the study, I would call it a qualitative approach using an objective measure, because it gauged a human's learning process through the places that were used to hide. The same measure was applied when the robot was seeking, to show how it had learned from watching "E" hide. The results are shown in the tables below.

------------------------------------------------------
Game No. | Hiding Location | Hiding Type
1 | eyes-closed | can’t see me if I can’t see you
2 | out-in-open | understanding rules of game
***suggestion don’t hide out in the open***
3 | under piano | Under
4 | in laundry room | Containment (room)
***break***
5 | under piano | Under
6 | in laundry room | Containment (room)
7 | in bathroom | Containment (room)
8 | in her room | Containment (room)
9 | under chair | Under
10 | behind bedroom door | Containment or behind
11 | under chair | Under
12 | under covers | Under or containment
13 | under covers | Under or containment
14 | in bathroom | Containment
15 | under glass coffee table | Under
Summary of where E hid at age 3 ½
-------------------------------------------------------

At 3 1/2, "E"'s first two games were spent learning the rules of the game. The robot then suggested not hiding in the open, and the table shows that her hiding ability grew: she no longer hid in the open, but behind and under objects, which shows her perspective-taking ability developing. However, at game 15, although she hid underneath a table, she still had not learned how the transparency of an object affects her ability to hide. The table shows that children respond to feedback very well at a young age and can learn quickly from it.

-----------------------------------------------------------------
Game No. | Hiding Location | Hiding Type
1 | Behind stuffed animals | Behind
2 | Behind boxes | Behind
3 | Inside her closet | Containment (room)
4 | Behind a table (moving to keep away from its view) | Behind
5 | Underneath a chair | Under
6 | Behind a chair | Behind
7 | Behind a bassinette | Behind
8 | Under a table | Under
9 | Behind a chair (moving to keep away from its view) | Behind
10 | Behind bedroom door | Containment or behind
Summary of where E hid at age 5 ½
-----------------------------------------------------------------

At 5 1/2, "E" had already demonstrated that her perspective-taking ability had grown significantly. She was no longer hiding in open spaces, and she no longer hid in places where she merely could not see the robot while the robot could still see her. She even moved as the robot moved to stay continuously hidden from it. This table shows that her perspective-taking abilities had improved enough over the two years that her ability to hide improved, too.

The robot had learned from "E" well. When the robot hid, it would try to hide behind or under objects, although the first few times it simply hid in the open with its eyes closed (back facing the seeker), just as "E" had done. When it came time for the robot to seek, it also learned from "E" to search in the spots where it would itself hide. It first made a list of all the places it would hide in the house (good hiding spots, ranked by the ability to hide behind, under, or inside something, came first) and then systematically visited each hiding spot one by one to look for "E". This showed the robot was able to "learn" some perspective-taking ability from "E".
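The seeking strategy described above (rank candidate spots by how well they conceal, then visit them in order) can be sketched in a few lines of Python. The spot names, categories, and scores here are hypothetical illustrations, not values from the paper:

```python
# Hypothetical sketch of the robot's seeking strategy: rank candidate
# hiding spots by a concealment score (under/in/behind beat open spots),
# then visit them one by one until the hider is found.

def rank_hiding_spots(spots):
    """Order spots best-first by a simple concealment score (assumed values)."""
    concealment = {"under": 3, "in": 3, "behind": 2, "open": 0}
    return sorted(spots, key=lambda s: concealment[s["type"]], reverse=True)

def seek(spots, is_hiding_at):
    """Visit ranked spots one by one; return the spot where the hider is found."""
    for spot in rank_hiding_spots(spots):
        if is_hiding_at(spot["name"]):
            return spot["name"]
    return None  # searched every candidate without finding the hider

spots = [
    {"name": "middle of room", "type": "open"},
    {"name": "under piano", "type": "under"},
    {"name": "behind door", "type": "behind"},
]
print(seek(spots, lambda name: name == "behind door"))  # → behind door
```

The key design point mirrors the paper's description: the search order is derived from the robot's own model of what makes a good hiding spot, which is the small piece of perspective-taking it acquired from watching "E" hide.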

The study was not systematic because it only measured one area of the human thought process (perspective-taking) and one slice of the computer-human interaction process.



Discussion: 

This was an awesome paper to read because it studied not only computer-human interaction but also the human mind and how children's cognitive abilities grow as they age. Overall, I thought the idea was cool, and I liked that the authors performed the study through a game of hide and seek. I believe the evaluation was appropriate because it not only measured "E"'s perspective-taking abilities but also studied how the robot learned from "E" and improved as the games went on. The contribution is not that new: plenty of studies have been performed with robots and humans interacting. However, I do not know of any other study where a robot learns from a child, and vice versa, by playing hide and seek. I found this paper really interesting. There was a link to a video of the interactions, but for some reason it does not work. The human mind is crazy, and it's weird to think about another human thinking. Robots may be able to do math a lot faster than us, but they will never have a brain as complex as a human's.
