changeset 3:c93a37ff6c79

Wrote slides to section 2
author Yasutaka Higa <e115763@ie.u-ryukyu.ac.jp>
date Thu, 18 Jun 2015 22:50:52 +0900
parents 45f5a93790db
children 51b87e0db067
files slide.md
diffstat 1 files changed, 62 insertions(+), 1 deletions(-)
--- a/slide.md	Thu Jun 18 16:06:36 2015 +0900
+++ b/slide.md	Thu Jun 18 22:50:52 2015 +0900
@@ -87,7 +87,7 @@
 * Only two individuals (a robot and a human participant): we do not take into consideration a higher number of individuals.
 * Eye contact is taken for granted.
 * Age is considered part of 'power relationship'
-* Regionality is not considered.
+* Regionality is not considered.
 * Setting is not considered
 
 # Model of Greetings: Assumptions (6 - 10)
@@ -97,6 +97,67 @@
 * Time since the last interaction is partially included in 'social distance'
 * Intimacy and politeness are not necessary
 
+# Model of Greetings: Basis of classification
+* Input
+    * All the other factors are then considered features of a mapping problem
+    * They are categorical data, as they can assume only two or three values.
+* Output
+    * The outputs can also assume only a limited set of categorical values.
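+
+As an illustration only (the feature names and values below are hypothetical, not taken from the paper), the categorical inputs and outputs could look like this:
+
+```python
+# Hypothetical categorical context features: each can take only two or three values
+CONTEXT_FEATURES = {
+    "power_relationship": ("higher", "equal", "lower"),
+    "social_distance": ("close", "distant"),
+    "gender": ("same", "different"),
+}
+
+# Hypothetical categorical outputs (greeting types)
+GREETING_TYPES = ("bow", "handshake", "wave")
+
+# One input to the mapping problem: a vector of categorical values
+context = ("equal", "close", "same")
+```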
+
+# Model of Greetings: Features, mapping discriminants, classes, and possible status
+TODO: FIGURE 2
+
+# Model of Greetings: Overview of the greeting model
+* The greeting model takes context data as input and produces the appropriate robot posture and speech for that input.
+* The two outputs are evaluated by the participants of the experiment through written questionnaires.
+* The training data obtained from these evaluations are given as feedback to the two mappings.
+
+# Model of Greetings: Greeting selection system training data
+* Mappings can be trained to an initial state with data taken from the sociology literature.
+* Training data should be classified through some machine learning method or formula.
+* We decided to use conditional probabilities, in particular the Naive Bayes formula, to map the data (see the sketch below).
+* Naive Bayes only requires a small amount of training data.
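+
+A minimal from-scratch sketch of Naive Bayes over such categorical data (the toy training rows below are invented, not the sociology-derived data used in the paper):
+
+```python
+from collections import Counter
+
+# Toy training rows: (context vector, greeting type) -- invented values
+train = [
+    (("equal", "close", "same"), "wave"),
+    (("higher", "distant", "same"), "bow"),
+    (("higher", "close", "same"), "bow"),
+    (("equal", "distant", "different"), "handshake"),
+]
+
+def naive_bayes_scores(x, train, alpha=1.0):
+    """Score each class as P(class) * prod_i P(x_i | class), Laplace-smoothed."""
+    class_counts = Counter(cls for _, cls in train)
+    scores = {}
+    for cls, n_cls in class_counts.items():
+        score = n_cls / len(train)  # prior P(class)
+        for i, value in enumerate(x):
+            n_match = sum(1 for feats, c in train if c == cls and feats[i] == value)
+            n_values = len({feats[i] for feats, _ in train})
+            score *= (n_match + alpha) / (n_cls + alpha * n_values)
+        scores[cls] = score
+    return scores
+
+scores = naive_bayes_scores(("higher", "close", "same"), train)
+print(max(scores, key=scores.get))  # -> "bow" on this toy data
+```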
+
+# Model of Greetings: Details of training data
+* While training data for gestures can be obtained from the literature, data for words can also be obtained from text corpora.
+* English: English corpora, such as the British National Corpus or the Corpus of Historical American English, are used.
+* Japanese: extracted from the data sets of [24, 37, 41-43], since analyzing Japanese corpora is difficult.
+
+# Model of Greetings: Location Assumption
+* The location of the experiment was Germany.
+* For this reason, the only dataset needed was the Japanese one.
+* As stated in the motivations at the beginning of this paper, the robot should initially behave like a foreigner.
+* ARMAR-IIIb, trained with Japanese data, will have to interact with German people and adapt to their customs.
+
+# Model of Greetings: Mappings and questionnaires
+* The mapping is represented by a dataset, initially built from the training data: a table containing a weight for each context vector and each greeting type.
+* We now need to update these weights.
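+
+A rough sketch of how such a table could be stored, reusing the hypothetical context values and greeting types from the earlier sketches (the weights themselves are invented):
+
+```python
+# One row per context vector, one weight per greeting type.
+# The initial weights would be derived from the training data.
+weights = {
+    ("equal", "close", "same"): {"bow": 0.2, "handshake": 0.3, "wave": 0.5},
+    ("higher", "distant", "same"): {"bow": 0.7, "handshake": 0.2, "wave": 0.1},
+}
+```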
+
+# Model of Greetings: Feedback from three questionnaires
+* Whenever a new feature vector is given as input, it is checked whether it is already contained in the dataset.
+* If it is, the weights are directly read from the dataset;
+* if it is not, they are assigned the probabilities calculated through the Naive Bayes classifier, as sketched below.
+* The output is the chosen greeting, after which the interaction is evaluated through a questionnaire.
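+
+A sketch of this lookup and greeting choice, reusing the hypothetical `weights` table and the `naive_bayes_scores` helper from the earlier sketches:
+
+```python
+def get_row(context, weights, train):
+    """Read the weights for a known context; otherwise seed the row with
+    the Naive Bayes probabilities computed from the training data."""
+    if context not in weights:
+        weights[context] = naive_bayes_scores(context, train)
+    return weights[context]
+
+def choose_greeting(context, weights, train):
+    row = get_row(context, weights, train)
+    return max(row, key=row.get)  # greeting type with the highest weight
+```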
+
+# Model of Greetings: Three questionnaires for feedback
+* Answers to the questionnaires are given on a five-point semantic differential scale:
+1. How appropriate was the greeting chosen by the robot for the current context?
+2. (If the evaluation at point 1 was <= 3) which greeting type would have been appropriate instead?
+3. (If the evaluation at point 1 was <= 3) which context would have been appropriate, if any, for the greeting type of point 1?
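+
+Purely as an illustration of this branching (the field names and the `ask` callback are made up), the three answers could be collected like this:
+
+```python
+def collect_feedback(chosen_greeting, ask):
+    """ask(question) returns the participant's answer as a string.
+    Questions 2 and 3 are asked only when the rating is 3 or lower."""
+    rating = int(ask("How appropriate was the greeting for the current context? (1-5)"))
+    feedback = {"greeting": chosen_greeting, "rating": rating,
+                "better_greeting": None, "better_context": None}
+    if rating <= 3:
+        feedback["better_greeting"] = ask("Which greeting type would have been appropriate instead?")
+        feedback["better_context"] = ask("Which context, if any, would have been appropriate for this greeting?")
+    return feedback
+```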
+
+# Model of Greetings: Feedback and termination condition
+* Weights of the affected features are multiplied by a positive or negative reward (inspired by reinforcement learning) which is calculated proportionally to the evaluation.
+* Mappings stop evolving when the following two stopping conditions are satisfied:
+    * all possible values of all features have been explored
+    * the moving average of the latest 10 state transitions has decreased below a certain threshold.
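+
+A rough sketch of the weight update and the stopping test (the reward scaling and the threshold below are invented, not the paper's values):
+
+```python
+def apply_feedback(weights, context, greeting, rating, scale=0.1):
+    """Scale the affected weight up or down in proportion to the 1-5 rating:
+    ratings above 3 increase it, ratings below 3 decrease it."""
+    weights[context][greeting] *= 1.0 + scale * (rating - 3)
+
+def should_stop(explored_values, all_values, transition_sizes, threshold=0.05):
+    """Stop once every feature value has been explored and the moving average
+    of the last 10 state-transition magnitudes is below the threshold."""
+    window = transition_sizes[-10:]
+    converged = len(window) == 10 and sum(window) / 10 < threshold
+    return set(explored_values) >= set(all_values) and converged
+```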
+
+# Model of Greetings: Summary
+* Thanks to this implementation, mappings can evolve quickly, without requiring hundreds or thousands of iterations
+* Instead, they need a number of iterations comparable to the low number of interactions humans need to understand and adapt to social rules.
+
+# TODO: Add slides for chapter 3 (Implementation of ARMAR-IIIb)
+
+ 
 <style>
     .slide.cover H2 { font-size: 60px; }
 </style>