changeset 9:8a6f547b72c0

merge tatsuki slide and atton slide
author tatsuki
date Thu, 25 Jun 2015 06:07:48 +0900
parents 3325edf9139f
children 62f384a20c2c
files pictures/GestureExample.png pictures/GestureTable.png pictures/GreetingWordTable.png pictures/ImplementGestureARMARⅢ.png pictures/MCA.png pictures/MMM.png pictures/MMMModel.png pictures/room.png pictures/tableofgreetingwords.png
diffstat 10 files changed, 162 insertions(+), 3 deletions(-) [+]
line wrap: on
line diff
Binary file pictures/GestureExample.png has changed
Binary file pictures/GestureTable.png has changed
Binary file pictures/GreetingWordTable.png has changed
Binary file pictures/ImplementGestureARMARⅢ.png has changed
Binary file pictures/MCA.png has changed
Binary file pictures/MMM.png has changed
Binary file pictures/MMMModel.png has changed
Binary file pictures/room.png has changed
Binary file pictures/tableofgreetingwords.png has changed
--- a/	Wed Jun 24 15:48:35 2015 +0900
+++ b/	Thu Jun 25 06:07:48 2015 +0900
@@ -1,4 +1,4 @@
-title: A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot
+title: A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot
 author: Tatsuki KANAGAWA <br> Yasutaka HIGA
 profile: Concurrency Reliance Lab
 lang: Japanese
@@ -117,7 +117,7 @@
 <img src="pictures/model_overview.png" style='width: 75%; margin-left: 120px;'>
 # Greeting selection system training data
-* Mappings can be trained to an initial state with data taken from the literature of sociology studies.
+* Mappings can be trained to an initial state with data taken from the literature of sociology studies.
 * Training data should be classified through some machine learning method or formula.
 * We decided to use conditional probabilities: in particular the Naive Bayes formula to map data.
 * Naive Bayes only requires a small amount of training data.
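The Naive Bayes mapping described above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the feature names, training rows, and greeting labels are made up, and Laplace smoothing is added so that unseen feature values keep non-zero probability.

```python
from collections import Counter

# Toy training data: context feature vectors -> greeting type.
# All values below are illustrative, not from the paper.
TRAIN = [
    ({"location": "workplace", "social_distance": "acquaintance"}, "bow"),
    ({"location": "workplace", "social_distance": "acquaintance"}, "bow"),
    ({"location": "street", "social_distance": "unknown"}, "nod"),
    ({"location": "street", "social_distance": "friend"}, "wave"),
]

def naive_bayes(context, train=TRAIN, alpha=1.0):
    """Return P(greeting | context) for each greeting, with Laplace smoothing."""
    labels = Counter(greeting for _, greeting in train)
    scores = {}
    for g, n in labels.items():
        p = n / len(train)  # prior P(g)
        for feat, val in context.items():
            hits = sum(1 for c, lab in train if lab == g and c.get(feat) == val)
            observed = {c.get(feat) for c, _ in train}  # values seen for this feature
            p *= (hits + alpha) / (n + alpha * len(observed))  # smoothed P(val | g)
        scores[g] = p
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}  # normalize

probs = naive_bayes({"location": "workplace", "social_distance": "acquaintance"})
best = max(probs, key=probs.get)
```

With so few rows the point is only the shape of the computation: a prior per greeting times one smoothed conditional per context feature, then normalization.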
@@ -137,7 +137,7 @@
 * The mapping is represented by a dataset, initially built from training data, as a table containing weights for each context vector corresponding to each greeting type.
 * We now need to update these weights.
-# feedback from three questionnaires    <!-- FIXME : redundancy -->
+# Feedback from three questionnaires
 * Whenever a new feature vector is given as an input, it is checked to see whether it is already contained in the dataset or not.
 * In the former case, the weights are directly read from the dataset
 * in the latter case, they get assigned the values of probabilities calculated through the Naive Bayes classifier.
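The lookup-or-classify step above can be sketched as follows. The names are hypothetical and `classify` stands in for the trained Naive Bayes classifier described earlier.

```python
def get_weights(context, dataset, classify):
    """Return the greeting weights for a context vector (hypothetical sketch).

    If the vector is already contained in the dataset, the weights are
    read directly; otherwise they get assigned the probabilities
    produced by the classifier, and are cached for later updates.
    """
    key = tuple(sorted(context.items()))  # dicts are unhashable; use a sorted key
    if key in dataset:                    # former case: known vector
        return dataset[key]
    weights = classify(context)           # latter case: classifier probabilities
    dataset[key] = weights                # store so future feedback can update it
    return weights

dataset = {}
fake_classifier = lambda ctx: {"bow": 0.7, "handshake": 0.3}  # stand-in only
w1 = get_weights({"location": "workplace"}, dataset, fake_classifier)
w2 = get_weights({"location": "workplace"}, dataset, fake_classifier)
```

The second call hits the cached entry, so the classifier is consulted only once per new context vector.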
@@ -161,6 +161,165 @@
 # TODO: Please Add slides over chapter (3. implementation of ARMAR-IIIb)
+# Implementation on ARMAR-IIIb
+* ARMAR-III is designed for close cooperation with humans
+* ARMAR-III has a human-like appearance
+* It has sensory capabilities similar to those of humans
+* ARMAR-IIIb is a slightly modified version with a different shape of the head, the trunk, and the hands
+# Implementation of gestures
+* The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware
+* The patterns of the gestures are defined manually
+* Gestures are defined in the Master Motor Map (MMM) format and then converted into the robot's own format
+# Master Motor Map
+* The MMM is a reference 3D kinematic model
+* providing a unified representation of various human motion capture systems, action recognition systems, imitation systems, and visualization modules
+* This representation can be subsequently converted to other representations, such as action recognizers, 3D visualization, or implementation into different robots
+* The MMM is intended to become a common standard in the robotics community
+<img src="pictures/MMM.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Master Motor Map2
+* The MMM body model can be seen in the left-hand illustration in the figure
+* It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots
+* A conversion module is necessary to perform a transformation between this kinematic model and the ARMAR-IIIb kinematic model
+<img src="pictures/MMMModel.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Converter
+* The simplest converter for given joint angles would consist in a one-to-one mapping between the observed human subject and the robot
+* Because of the differences between the human and the robot kinematic structures, a one-to-one mapping can hardly show acceptable results in terms of a human-like appearance of the reproduced movement
+* This problem is addressed by applying a post-processing procedure in joint angle space
+* The joint angles, given in the MMM format, are optimized with respect to the tool centre point position
+* A solution is estimated by using the joint configuration of the MMM model on the robot
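The conversion idea above can be sketched in a few lines. This is not the real MMM framework API: the joint names, the mapping table, and the limits are invented, and clamping to the robot's joint limits is only a crude stand-in for the actual optimization in joint angle space.

```python
# Hypothetical one-to-one mapping for the joints the robot shares with
# the MMM model; joints absent on the robot (e.g. the clavicula) are dropped.
MMM_TO_ROBOT = {
    "neck_pitch": "head_tilt",
    "shoulder_r": "arm_r_0",
}
ROBOT_LIMITS = {"head_tilt": (-0.5, 0.5), "arm_r_0": (-1.5, 1.5)}  # radians, made up

def convert(mmm_angles):
    """Map MMM joint angles onto the robot, dropping unknown joints and
    clamping to joint limits as a stand-in for joint-space post-processing."""
    robot = {}
    for mmm_joint, angle in mmm_angles.items():
        target = MMM_TO_ROBOT.get(mmm_joint)
        if target is None:
            continue  # joint not present in the robot configuration
        lo, hi = ROBOT_LIMITS[target]
        robot[target] = max(lo, min(hi, angle))
    return robot

pose = convert({"neck_pitch": 0.8, "shoulder_r": 0.2, "clavicula_r": 0.1})
```

Even this toy version shows why a naive mapping degrades the motion: the clavicula angle is simply lost, and out-of-range angles get distorted, which is what the post-processing step has to compensate for.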
+# MMM support 
+* The MMM framework has broad support for every kind of human-like robot
+* Transfer rules can be defined in the MMM
+* Using these conversion rules, motions of the MMM model can be converted into movements of the robot
+* A conversion from the MMM model may not be available for a specific robot
+* but the motion representation parts of the MMM can be used nevertheless
+# Conversion example of MMM
+* After programming the postures directly on the MMM model, they were processed by the converter
+* The conversion is not trivial
+* because the human model contains many joints which are not present in the robot configuration
+* ARMAR cannot bend its body when performing a bow
+* so the bow was expressed using parts present in the robot (e.g., the neck)
+# Gesture example
+<img src="pictures/GestureExample.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Gesture implementation on ARMAR-III
+<img src="pictures/ImplementGestureARMARⅢ.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Modular Controller Architecture (MCA)
+* The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greetings model was also implemented
+* The list of postures is shown on the left, together with an activation option
+* When that option is activated, it is possible to select the context parameters through the radio buttons on the right
+<img src="pictures/MCA.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Implementation of words
+* Greeting words are implemented in two languages: Japanese and German
+* For example, in Japan it is common to use a specific greeting in the workplace: 「otsukaresama desu」
+* where a standard greeting like 「konnichi wa」 would be inappropriate
+* In German, such a greeting type does not exist
+* but the meaning of “thank you for your effort” at work can be directly translated into German
+* the robot knows dictionary terms, but does not understand the difference in usage of these words in different contexts
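The word selection above amounts to a context-keyed lookup over dictionary terms. A toy sketch follows; the contexts are simplified to a single feature, and the German entries are placeholders illustrating that no workplace-specific form exists, not wording from the paper.

```python
# Toy (language, location) -> greeting word table; illustrative only.
GREETINGS = {
    ("ja", "workplace"): "otsukaresama desu",  # workplace-specific greeting
    ("ja", "street"):    "konnichi wa",        # standard greeting
    ("de", "workplace"): "hallo",              # placeholder: no workplace form in German
    ("de", "street"):    "hallo",
}

def greeting_word(lang, location):
    """Look up the greeting word for a language and context (sketch)."""
    return GREETINGS[(lang, location)]
```

The point of the table is exactly the asymmetry in the slide: Japanese distinguishes the workplace context, while the German column collapses to one form.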
+# table of greeting words
+<img src="pictures/tableofgreetingwords.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Implementation of words2
+* These words were recorded through free text-to-speech software into wave files that the robot could play
+* ARMAR does not have embedded speakers in its body
+* so we added two small speakers behind the head and connected them to another computer
+# Experiment description
+* Experiments were conducted in Germany, in the room shown in Figure 9
+<img src="pictures/room.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Experiment description2
+* Participants were 18 German people of different ages, genders, and workplaces
+* The robot could be trained with various combinations of context features
+* It was not possible to include all combinations of feature values in the experiment
+* For example, there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’]
+* The [‘location’: ‘private’] case was left out, because it is impossible to simulate the interaction in a private context, such as one’s home
+# Experiment description3
+* The experiment was repeated with some participants
+* For example, the experiment was repeated at different times
+* or with the social distance changed from ‘unknown’ to ‘acquaintance’ between encounters
+* This way we could collect more data by manipulating the value of a single feature
+# Statistics of participants
+* The demographics of the 18 participants were as follows
+1. gender: M: 10; F: 8
+2. average age: 31.33
+3. age standard deviation: 13.16
+# Statistics of interactions
+* The number of interactions was determined by the stopping condition of the algorithm
+* The number of interactions, taking repetitions into account, was 30
+1. gender: M: 18; F: 12
+2. average age: 29.43
+3. age standard deviation: 12.46
+# The purpose of the experiment
+* The objective of the experiment was to adapt ARMAR-IIIb greeting behaviour from Japanese to German culture
+* The algorithm working for ARMAR was trained with only Japanese sociology data, and two mappings M0J were built for gestures and words
+* After interacting with German people, the resulting mappings M1 were expected to synthesize the rules of greeting interaction in Germany
+# The experiment protocol is as follows 1~5
+1. ARMAR-IIIb is trained with Japanese data
+2. The context data of the encounter are given as inputs to the algorithm and the robot is prepared
+3. The participant is briefed on the current situation and asked to interact with the robot accordingly
+4. The participant enters the room
+5. The robot’s greeting is triggered by an operator as the human participant approaches
+# The experiment protocol is as follows 6~10
+6. After the two parties have greeted each other, the robot is turned off
+7. the participant evaluates the robot’s behaviour through a questionnaire
+8. The mapping is updated using the subject’s feedback
+9. Repeat steps 2–8 for each participant
+10. Training stops once the state changes have stabilized
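The protocol's loop (steps 2-8, repeated until the mapping stabilizes) can be sketched as follows. `interact` and `update` are hypothetical stand-ins for the real greeting/questionnaire/mapping-update procedures, and the stopping rule shown, no change for a few consecutive interactions, is one plausible reading of "stabilized".

```python
def train(initial_mapping, participants, interact, update, stable_rounds=3):
    """Run the experiment loop until the mapping stops changing.

    interact(mapping, person) -> feedback   (steps 2-7: greet, questionnaire)
    update(mapping, feedback) -> mapping    (step 8: apply feedback)
    """
    mapping, unchanged = initial_mapping, 0
    for person in participants:
        feedback = interact(mapping, person)
        new_mapping = update(mapping, feedback)
        unchanged = unchanged + 1 if new_mapping == mapping else 0
        mapping = new_mapping
        if unchanged >= stable_rounds:  # step 10: stabilized, stop training
            break
    return mapping

# Toy run: each "participant" is simply the feedback they would give.
toy_interact = lambda mapping, person: person
toy_update = lambda mapping, feedback: {**mapping, **feedback}
result = train({"bow": 1.0},
               [{"handshake": 1.0}, {}, {}, {}, {}],
               toy_interact, toy_update)
```

In the toy run the first participant changes the mapping and the next three leave it untouched, so training stops before the last participant is reached.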
+# Results
+* The gesture mapping changed through the experiment as follows
+* Bowing was greatly reduced, while the handshake became common
+* The hug, which does not exist in the Japanese mapping, appeared
+* This is because participants gave feedback that a hug was appropriate
+<img src="pictures/GestureTable.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Results2
+* The biggest change in the words mapping is that the workplace greeting disappeared
+* A smaller change is the use of informal greetings
+* Some other patterns can be found in the gestures mappings, judging from the columns in Table 3 for T = 0
+* In Japan there is a pattern to the gestures depending on social distance
+* but in Germany this pattern does not exist
+* This is characteristic of Japanese society
+* The two mappings reflect the Japanese sociology literature and the feedback of the German participants, respectively
+<img src="pictures/GreetingWordTable.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>
+# Limitations and improvements
+* In the current implementation, there are also a few limitations
+* The first obvious limitation is related to the manual input of context data
+* →  The integrated use of cameras would make it possible to determine features such as gender, age, and race of the human
+* A speech recognition system and cameras could also detect the human’s own greeting
+* →  The robot itself could then determine whether its greeting was correct
+* Whether the response to a greeting is correct and what is expected could be decided by checking the distance to the partner, the timing of the greeting, the head orientation, or other information
+* This information would be more accurate than information collected using a questionnaire
+* It is also possible to extend the set of context features by using several literature sources
+* This would remove the simplifications made to the greeting model
+# Different kinds of embodiment
+* A humanoid robot has a body similar to the human one
+* but robots can vary in shape, size, and capabilities
+* Selecting the appropriate type of greeting has a different effect for each robot
+* By extending this work, robots could start discovering by themselves the best interaction methods with humans, depending on their physical characteristics
     .slide.cover H2 { font-size: 60px; }