# HG changeset patch
# User tatsuki
# Date 1435180068 -32400
# Node ID 8a6f547b72c00a50c728f343fb03d13abb8d75c3
# Parent 3325edf9139fb9073894514db4a58abf8e94196a
merge tatsuki slide and atton slide

diff -r 3325edf9139f -r 8a6f547b72c0 pictures/GestureExample.png
Binary file pictures/GestureExample.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/GestureTable.png
Binary file pictures/GestureTable.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/GreetingWordTable.png
Binary file pictures/GreetingWordTable.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/ImplementGestureARMARⅢ.png
Binary file pictures/ImplementGestureARMARⅢ.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/MCA.png
Binary file pictures/MCA.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/MMM.png
Binary file pictures/MMM.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/MMMModel.png
Binary file pictures/MMMModel.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/room.png
Binary file pictures/room.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 pictures/tableofgreetingwords.png
Binary file pictures/tableofgreetingwords.png has changed
diff -r 3325edf9139f -r 8a6f547b72c0 slide.md
--- a/slide.md	Wed Jun 24 15:48:35 2015 +0900
+++ b/slide.md	Thu Jun 25 06:07:48 2015 +0900
@@ -1,4 +1,4 @@
-title: A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot
+title: A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot
 author: Tatsuki KANAGAWA, Yasutaka HIGA
 profile: Concurrency Reliance Lab
 lang: Japanese
@@ -117,7 +117,7 @@
 # Greeting selection system training data
 
-* Mappings can be trained to an initial state with data taken from the literature of sociology studies. 
+* Mappings can be trained to an initial state with data taken from the literature of sociology studies.
 * Training data should be classified through some machine learning method or formula.
 * We decided to use conditional probabilities: in particular the Naive Bayes formula to map data.
 * Naive Bayes only requires a small amount of training data.
@@ -137,7 +137,7 @@
 * The mapping is represented by a dataset, initially built from training data, as a table containing weights for each context vector corresponding to each greeting type.
 * We now need to update these weights.
 
-# feedback from three questionnaires 
+# feedback from three questionnaires
 * Whenever a new feature vector is given as an input, it is checked to see whether it is already contained in the dataset or not.
 * In the former case, the weights are directly read from the dataset
 * in the latter case, they get assigned the values of probabilities calculated through the Naive Bayes classifier.
@@ -161,6 +161,165 @@
 # TODO: Please Add slides over chapter (3. implementation of ARMAR-IIIb)
+
+# Implementation on ARMAR-IIIb
+* ARMAR-III is designed for close cooperation with humans
+* ARMAR-III has a humanlike appearance
+* and sensory capabilities similar to those of humans
+* ARMAR-IIIb is a slightly modified version with a different shape of the head, the trunk, and the hands
+
+# Implementation of gestures
+* The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware
+* The patterns of the gestures are defined manually
+* Gestures are defined in the Master Motor Map (MMM) format and then converted for the robot
+
+# Master Motor Map
+* The MMM is a reference 3D kinematic model
+* It provides a unified representation of various human motion capture systems, action recognition systems, imitation systems, and visualization modules
+* This representation can subsequently be converted to other representations, such as action recognizers, 3D visualizations, or implementations on different robots
+* The MMM is intended to become a common standard in the robotics community
+
+# Master Motor Map2
+* The body model of the MMM can be seen in the left-hand illustration in the figure
+* It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots
+* A conversion module is necessary to perform a transformation between this kinematic model and the ARMAR-IIIb kinematic model
+
+# converter
+* A converter given joint angles would consist of a one-to-one mapping between an observed human subject and the robot
+* Because of the differences between the kinematic structures of a human and the robot, a one-to-one mapping can hardly give acceptable results in terms of a human-like appearance of the reproduced movement
+* This problem is addressed by applying a post-processing procedure in joint angle space
+* The joint angles, given in the MMM format, are optimized with respect to the tool centre point position
+* A solution is estimated by using the joint configuration of the MMM model on the robot
+
+# MMM support
+* The MMM framework has high support for every kind of human-like robot
+* The MMM can define the transfer rules
+* Using the conversion rules, the MMM model can be converted into the movements of the robot
+* Even if conversion from the MMM model is not possible for a specific robot,
+* the motion representation parts of the MMM can be used nevertheless
+
+# Conversion example of MMM
+* After programming the postures directly on the MMM model, they were processed by the converter
+* Conversion is not easy
+* because the human model contains many joints which are not present in the robot configuration
+* ARMAR does not bend the body when performing a bow
+* The bow was instead expressed using a part present in the robot (e.g., the neck)
+
+# GestureExample
+
+# ImplementGestureARMARⅢ
+
+# Modular Controller Architecture, a modular software framework
+* The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greetings model was also implemented
+* The list of postures is on the left, together with the option
+* When that option is activated, it is possible to select the context parameters through the radio buttons on the right
+
+# Implementation of words
+* Greeting words are used in two languages: Japanese and German
+* For example, in Japan it is common to use a specific greeting in the workplace, 「otsukaresama desu」
+* where a standard greeting like 「konnichi wa」 would be inappropriate
+* In German, such a greeting type does not exist
+* but the meaning of “thank you for your effort” at work can be directly translated into German
+* The robot knows the dictionary terms, but does not understand the difference in usage of these words in different contexts
+
+# table of greeting words
+
+# Implementation of words
+* These words have been recorded through free text-to-speech software into wave files that could be played by the robot
+* ARMAR does not have embedded speakers in its body
+* so two small speakers were added behind the head and connected to another computer
+
+# Experiment description
+* Experiments were conducted in Germany, in a room as shown in Figure 9
+
+# Experiment description2
+* Participants were 18 German people of different ages, genders, and workplaces
+* The robot could be trained with various combinations of context
+* It was not possible to include all combinations of feature values in the experiment
+* for example, there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’]
+* the [‘location’: ‘private’] case was left out, because it is impossible to simulate the interaction in a private context, such as one’s home
+
+# Experiment description3
+* The experiment was repeated more than once with some participants
+* for example, the experiment was repeated at different times
+* or the social distance was changed from ‘unknown’ to ‘acquaintance’ after the first exchange
+* In this way we could collect more data by manipulating the value of a single feature
+
+# Statistics of participants
+* The demographics of the 18 participants were as follows
+1. gender: M: 10; F: 8
+2. average age: 31.33
+3. age standard deviation: 13.16
+
+# Statistics of participants2
+* The number of interactions was determined by the stopping condition of the algorithm
+* The number of interactions, taking repetitions into account, was 30
+1. gender: M: 18; F: 12
+2. average age: 29.43
+3. age standard deviation: 12.46
+
+# The purpose of the experiment
+* The objective of the experiment was to adapt ARMAR-IIIb's greeting behaviour from Japanese to German culture
+* The algorithm running on ARMAR was trained with only Japanese sociology data, and two mappings M0J were built, for gestures and words
+* After interacting with German people, the resulting mappings M1 were expected to synthesize the rules of greeting interaction in Germany
+
+# The experiment protocol is as follows 1~5
+1. ARMAR-IIIb is trained with Japanese data
+2. The context features of the encounter are given as inputs to the algorithm and the robot is prepared
+3. The participant is prompted to interact with the robot taking the current situation into consideration
+4. The participant enters the room
+5. The robot’s greeting is triggered by an operator as the human participant approaches
+
+# The experiment protocol is as follows 6~10
+6. After the two parties have greeted each other, the robot is turned off
+7. The participant evaluates the robot’s behaviour through a questionnaire
+8. The mapping is updated using the subject’s feedback
+9. Steps 2–8 are repeated for each participant
+10. Training stops after the state changes have stabilized
+
+# Results
+* The gestures mapping changed through the experiment as follows
+* Bowing was greatly reduced, while the handshake became common
+* The hug, which did not exist in the Japanese mapping, appeared
+* This is because the participants gave feedback that a hug was appropriate
+
+# Results2
+* The biggest change in the words mapping is that the workplace greeting is gone
+* A smaller change is the use of the informal greeting
+* Some other patterns can be found in the gestures mappings, judging from the columns in Table 3 for T = 0
+* In Japan there is a pattern of choosing the gesture by social distance
+* but in Germany this pattern does not exist
+* This is characteristic of Japanese society
+* The two mappings reflect the Japanese sociology literature and the feedback of the German participants
+
+# Limitations and improvements
+* The current implementation has a few limitations
+* The first obvious limitation is related to the manual input of context data
+* →  The integrated use of cameras would make it possible to determine features such as the gender, age, and race of the human
+* A speech recognition system and cameras could also detect the human’s own greeting
+* →  The robot itself could then determine whether its greeting was correct
+* Checking the distance to the partner, the timing of the greeting, the head orientation, or other information could be used to decide whether the response to a greeting is correct and what is expected
+* This information would be more accurate than the information collected using a questionnaire
+* It would also be possible to extend the set of contexts by using multiple documents
+* The simplification of the greeting model could then be removed
+
+# Different kinds of embodiment
+* A humanoid robot has a body similar to a human’s
+* but robots can vary in shape, size, and capability
+* The appropriate type of greeting should be selected for each robot
+* By extending this work, robots could start discovering the interaction methods best suited to humans depending on their own physical characteristics