
<!DOCTYPE HTML>

<html lang="en">
<head>
	<title>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</title>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=1274, user-scalable=no">
	<meta name="generator" content="Slide Show (S9)">
	<meta name="author" content="Tatsuki KANAGAWA <br> Yasutaka HIGA">
	<link rel="stylesheet" href="themes/ribbon/styles/style.css">
</head>
<body class="list">
	<header class="caption">
		<h1>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h1>
		<p>Tatsuki KANAGAWA <br> Yasutaka HIGA</p>
	</header>
	<div class="slide cover" id="Cover"><div>
		<section>
			<header>
				<h2>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h2>
				<h3 id="author">Tatsuki KANAGAWA <br> Yasutaka HIGA</h3>
				<h3 id="profile">Concurrency Reliance Lab</h3>
			</header>
		</section>
	</div></div>

<!-- todo: add slide.classes to div -->
<!-- todo: create slide id from header? like a slug in blogs? -->

<div class="slide" id="2"><div>
		<section>
			<header>
				<h1 id="abstract-robots-and-cultures">Abstract: Robots and cultures</h1>
			</header>
			<!-- === begin markdown block ===

      generated by markdown/1.2.0 on Ruby 1.9.3 (2011-10-30) [x86_64-darwin10]
                on 2015-06-26 09:38:02 +0900 with Markdown engine kramdown (1.7.0)
                  using options {}
  -->

<!-- _S9SLIDE_ -->

<ul>
  <li>Robots, especially humanoids, are expected to perform human-like actions and adapt to our ways of communication in order to facilitate their acceptance in human society.</li>
  <li>Among humans, the rules of communication change depending on background culture.</li>
  <li>Greetings are a part of communication in which cultural differences are strong.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="3"><div>
		<section>
			<header>
				<h1 id="abstract-summary-of-this-paper">Abstract: Summary of this paper</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>In this paper, we present the modelling of social factors that influence greeting choice,</li>
  <li>and the resulting novel culture-dependent system for selecting greeting gestures and words.</li>
  <li>An experiment with German participants was run using the humanoid robot ARMAR-IIIb.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="4"><div>
		<section>
			<header>
				<h1 id="introduction-acceptance-of-humanoid-robots">Introduction: Acceptance of humanoid robots</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Acceptance of humanoid robots in human societies is a critical issue.</li>
  <li>One of the main factors is the relationship between the background culture of human partners and acceptance.
    <ul>
      <li>ecologies, social structures, philosophies, educational systems.</li>
    </ul>
  </li>
</ul>



		</section>
</div></div>

<div class="slide" id="5"><div>
		<section>
			<header>
				<h1 id="introduction-culture-adapted-greetings">Introduction: Culture adapted greetings</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>In the work of Trovato et al., culture-dependent acceptance of, and discomfort with, greeting gestures were found in a comparative study with Egyptian and Japanese participants.</li>
  <li>The importance of culture-specific customization of greetings was thus confirmed.</li>
  <li>Acceptance of robots can be improved if they are able to adapt to different kinds of greeting rules.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="6"><div>
		<section>
			<header>
				<h1 id="introduction-methods-of-implementation-adaptive-behaviour">Introduction: Methods of implementation adaptive behaviour</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Adaptive behaviour in robotics can be achieved through various methods:
    <ul>
      <li>reinforcement learning</li>
      <li>neural networks</li>
      <li>genetic algorithms</li>
      <li>function regression</li>
    </ul>
  </li>
</ul>



		</section>
</div></div>

<div class="slide" id="7"><div>
		<section>
			<header>
				<h1 id="introduction-greeting-interaction-with-robots">Introduction: Greeting interaction with robots</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Robots are expected to interact and communicate with humans of different cultural background in a natural way.</li>
  <li>It is therefore important to study greeting interaction between robots and humans.
    <ul>
      <li>ARMAR-III: greeted the Chancellor of Germany with a handshake</li>
      <li>ASIMO: is capable of performing a wider range of greetings (a handshake, waving both hands, and bowing)</li>
    </ul>
  </li>
</ul>



		</section>
</div></div>

<div class="slide" id="8"><div>
		<section>
			<header>
				<h1 id="introduction-objectives-of-this-paper">Introduction: Objectives of this paper</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The robot should be trained with sociology data related to one country, and evolve its behaviour by engaging with people of another country in a small number of interactions.</li>
  <li>For the implementation of the gestures and the interaction experiment, we used the humanoid robot ARMAR-IIIb.</li>
  <li>As the experiment is carried out in Germany, the interactions are with German participants, while preliminary training is done with Japanese data, which is culturally extremely different.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="9"><div>
		<section>
			<header>
				<h1 id="introduction-armar-iiib">Introduction: ARMAR-IIIb</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/ARMAR-IIIb.png" style="width: 350px; height: 350px; margin-left: 200px;" /></p>



		</section>
</div></div>

<div class="slide" id="10"><div>
		<section>
			<header>
				<h1 id="introduction-target-scenario">Introduction: Target scenario</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The idea behind this study is a typical scenario in which a foreigner visiting a country for the first time greets local people in an inappropriate way, because he is unaware of the rules that define the greeting choice.
    <ul>
      <li>(e.g., a Westerner in Japan)</li>
    </ul>
  </li>
  <li>For example, he might want to shake hands or hug, and will receive a bow instead.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="11"><div>
		<section>
			<header>
				<h1 id="introduction-objectives-of-this-work">Introduction: Objectives of this work</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>This work is an application of a study of sociology into robotics.</li>
  <li>Our contribution is to synthesize the complex and sparse data related to greeting types into a model;</li>
  <li>create a selection and adaptation system;</li>
  <li>and implement the greetings in a way that can potentially be applied to any robot.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="12"><div>
		<section>
			<header>
				<h1 id="greeting-selection-greetings-among-humans">Greeting Selection: Greetings among humans</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Greetings are the means of initiating and closing an interaction.</li>
  <li>We desire that robots be able to greet people in a similar way to humans.</li>
  <li>For this reason, understanding current research on greetings in sociological studies is necessary.</li>
  <li>Moreover, depending on cultural background, there can be different rules of engagement in human-human interaction.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="13"><div>
		<section>
			<header>
				<h1 id="greeting-selection-solution-for-selection">Greeting Selection: Solution for selection</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>A unified model of greetings does not seem to exist in the literature, but a few studies have attempted a classification of greetings.</li>
  <li>Some more specific studies have been done on handshaking.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="14"><div>
		<section>
			<header>
				<h1 id="greeting-selection-classes-for-greetings">Greeting Selection: Classes for greetings</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>A classification of greetings was first attempted by Friedman based on intimacy and commonness.</li>
  <li>The following greeting types were mentioned: smile; wave; nod; kiss on mouth; kiss on cheek; hug; handshake; pat on back; rising; bow; salute; and kiss on hand.</li>
  <li>Greenbaum et al. also performed a gender-related investigation, while [24] contained a comparative study between Germans and Japanese.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="15"><div>
		<section>
			<header>
				<h1 id="greeting-selection-factors-on-classification">Greeting Selection: Factors on Classification</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>‘terms’: the same terms with different meanings, or different terms with the same meaning.</li>
  <li>‘location’: influences intimacy and greeting words (private or public).</li>
  <li>‘intimacy’: influenced by physical distance, eye contact, gender, location, and culture (social distance).</li>
  <li>‘time’: the time of day is important for the choice of words.</li>
  <li>‘politeness’, ‘power relationship’, ‘culture’, and more.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="16"><div>
		<section>
			<header>
				<h1 id="greeting-selection-factors-on-classification-1">Greeting Selection: Factors on Classification</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The factors to be excluded are greyed out.</li>
</ul>

<p><img src="pictures/factors.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="17"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-assumptions-1---5">Model of Greetings: Assumptions (1 - 5)</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The simplification was guided by the following ten assumptions.</li>
  <li>Only two individuals (a robot and a human participant): we do not take into consideration a higher number of individuals.</li>
  <li>Eye contact is taken for granted.</li>
  <li>Age is considered part of ‘power relationship’.</li>
  <li>Regionality is not considered.</li>
  <li>Setting is not considered.</li>



		</section>
</div></div>

<div class="slide" id="18"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-assumptions-6---10">Model of Greetings: Assumptions (6 - 10)</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Physical distance is close enough to allow interaction.</li>
  <li>Gender is intended to be a same-sex dyad.</li>
  <li>Affect is considered together with ‘social distance’.</li>
  <li>Time since the last interaction is partially included in ‘social distance’.</li>
  <li>Intimacy and politeness are not necessary.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="19"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-basis-of-classification">Model of Greetings: Basis of classification</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Input
    <ul>
      <li>All the other factors are then considered features of a mapping problem.</li>
      <li>They are categorical data, as they can assume only two or three values.</li>
    </ul>
  </li>
  <li>Output
    <ul>
      <li>The outputs can also assume only a limited set of categorical values.</li>
    </ul>
  </li>
</ul>
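The categorical input/output structure described above can be sketched as a small validation helper. The feature names and value sets below are illustrative assumptions drawn from the factors listed on the earlier slides, not the paper's exact encoding.

```python
# Minimal sketch of the categorical feature/class encoding assumed here.
# Feature names and values are illustrative; the paper's exact set may differ.
CONTEXT_FEATURES = {
    "location": ["workplace", "public", "private"],
    "social_distance": ["unknown", "acquaintance", "friend"],
    "power_relationship": ["higher", "equal", "lower"],
    "time_of_day": ["morning", "afternoon", "evening"],
}

GESTURE_CLASSES = ["bow", "handshake", "wave", "hug", "nod"]

def make_context(**kwargs):
    """Validate a context vector against the categorical feature definitions."""
    for name, value in kwargs.items():
        if name not in CONTEXT_FEATURES:
            raise KeyError(f"unknown feature: {name}")
        if value not in CONTEXT_FEATURES[name]:
            raise ValueError(f"invalid value {value!r} for feature {name!r}")
    return dict(kwargs)

ctx = make_context(location="workplace", social_distance="acquaintance",
                   power_relationship="equal", time_of_day="morning")
```

Because every feature can assume only two or three values, the whole input space is small, which is what makes learning from few interactions plausible.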



		</section>
</div></div>

<div class="slide" id="20"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-features-mapping-discriminants-classes-and-possible-status">Model of Greetings: Features, mapping discriminants, classes, and possible status</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/classes.png" style="width: 60%; margin-left: 150px;" /></p>



		</section>
</div></div>

<div class="slide" id="21"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-overview-of-the-greeting-model">Model of Greetings: Overview of the greeting model</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The greeting model takes context data as input and produces the appropriate robot posture and speech for that input.</li>
  <li>The two outputs are evaluated by the participants of the experiment through written questionnaires.</li>
  <li>The training data obtained from these evaluations are given as feedback to the two mappings.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="22"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-overview-of-the-greeting-model-1">Model of Greetings: Overview of the greeting model</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/model_overview.png" style="width: 75%; margin-left: 120px;" /></p>



		</section>
</div></div>

<div class="slide" id="23"><div>
		<section>
			<header>
				<h1 id="greeting-selection-system-training-data">Greeting selection system training data</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Mappings can be trained to an initial state with data taken from the sociology literature.</li>
  <li>Training data should be classified through some machine learning method or formula.</li>
  <li>We decided to use conditional probabilities: in particular, the Naive Bayes formula to map data.</li>
  <li>Naive Bayes only requires a small amount of training data.</li>
</ul>
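The choice of Naive Bayes can be illustrated with a minimal classifier over categorical features. This is a generic sketch with Laplace smoothing and made-up feature and class names, not the system's actual training code.

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples):
    """samples: list of (feature_dict, greeting_class) pairs."""
    class_counts = Counter(cls for _, cls in samples)
    # feature_counts[cls][feature][value] = observation count
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for features, cls in samples:
        for f, v in features.items():
            feature_counts[cls][f][v] += 1
    return class_counts, feature_counts

def predict(class_counts, feature_counts, features, alpha=1.0):
    """Return the class maximizing P(class) * prod_f P(value_f | class)."""
    total = sum(class_counts.values())
    best_cls, best_p = None, -1.0
    for cls, n in class_counts.items():
        p = n / total
        for f, v in features.items():
            counts = feature_counts[cls][f]
            # Laplace smoothing keeps unseen values from zeroing the product
            p *= (counts[v] + alpha) / (sum(counts.values()) + alpha * (len(counts) + 1))
        if p > best_p:
            best_cls, best_p = cls, p
    return best_cls

# Illustrative toy data, not taken from the paper
data = [({"location": "workplace", "distance": "acquaintance"}, "bow"),
        ({"location": "public", "distance": "unknown"}, "bow"),
        ({"location": "public", "distance": "friend"}, "wave")]
cc, fc = train_naive_bayes(data)
print(predict(cc, fc, {"location": "workplace", "distance": "acquaintance"}))  # → bow
```

The conditional-independence assumption is what lets such a classifier work with only a handful of training samples per class.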



		</section>
</div></div>

<div class="slide" id="24"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-details-of-training-data">Model of Greetings: Details of training data</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>While training data for gestures can be obtained from the literature, data for words can also be obtained from text corpora.</li>
  <li>English: English corpora, such as the British National Corpus or the Corpus of Historical American English, are used.</li>
  <li>Japanese: extracted from data sets by [24, 37, 41-43], as analysing Japanese corpora is difficult.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="25"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-location-assumption">Model of Greetings: Location Assumption</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The location of the experiment was Germany.</li>
  <li>For this reason, the only dataset needed was the Japanese one.</li>
  <li>As stated in the motivations at the beginning of this paper, the robot should initially behave like a foreigner.</li>
  <li>ARMAR-IIIb, trained with Japanese data, will have to interact with German people and adapt to their customs.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="26"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-mappings-and-questionnaires">Model of Greetings: Mappings and questionnaires</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The mapping is represented by a dataset, initially built from training data, as a table containing weights for each context vector corresponding to each greeting type.</li>
  <li>We now need to update these weights.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="27"><div>
		<section>
			<header>
				<h1 id="feedback-from-three-questionnaires">feedback from three questionnaires</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Whenever a new feature vector is given as input, it is checked to see whether it is already contained in the dataset.</li>
  <li>In the former case, the weights are directly read from the dataset;</li>
  <li>in the latter case, they are assigned the probabilities calculated through the Naive Bayes classifier.</li>
  <li>The output is the chosen greeting, after which the interaction is evaluated through a questionnaire.</li>
</ul>
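The lookup-or-classify behaviour described above could be sketched as follows. The weight-table layout and the `naive_bayes_probs` helper are hypothetical stand-ins for the system's real data structures.

```python
def get_weights(dataset, context, naive_bayes_probs):
    """Return per-greeting weights for a context vector.

    dataset: dict mapping a context (as a frozenset of items) to a dict
    of greeting-type weights. naive_bayes_probs is a hypothetical callable
    returning {greeting: probability} for a context not yet in the dataset.
    """
    key = frozenset(context.items())
    if key in dataset:
        # Known context: read the weights directly from the dataset
        return dataset[key]
    # Unseen context: initialize the weights with Naive Bayes probabilities
    weights = dict(naive_bayes_probs(context))
    dataset[key] = weights  # cache for subsequent interactions
    return weights

dataset = {}
uniform = lambda ctx: {"bow": 0.5, "handshake": 0.5}  # toy classifier stub
w = get_weights(dataset, {"location": "public"}, uniform)
```

Caching the classifier's output as an explicit table is what allows the later questionnaire feedback to adjust individual entries.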



		</section>
</div></div>

<div class="slide" id="28"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-three-questionnaires-for-feedback">Model of Greetings: Three questionnaires for feedback</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Questionnaire answers use a five-point semantic differential scale:
    <ol>
      <li>How appropriate was the greeting chosen by the robot for the current context?</li>
      <li>(If the evaluation at point 1 was &lt;= 3) which greeting type would have been appropriate instead?</li>
      <li>(If the evaluation at point 1 was &lt;= 3) which context would have been appropriate, if any, for the greeting type of point 1?</li>
    </ol>
  </li>
</ul>



		</section>
</div></div>

<div class="slide" id="29"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-feedback-and-terminate-condition">Model of Greetings: feedback and terminate condition</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Weights of the affected features are multiplied by a positive or negative reward (inspired by reinforcement learning) which is calculated proportionally to the evaluation.</li>
  <li>Mappings stop evolving when the following two stopping conditions are satisfied:</li>
  <li>all possible values of all features have been explored</li>
  <li>and the moving average of the latest 10 state transitions has decreased below a certain threshold.</li>
</ul>
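The multiplicative reward update and the moving-average stopping rule can be sketched as below. The exact reward scaling (here a linear factor around a neutral evaluation of 3) is an assumption, as the slides do not give the formula, and the "all feature values explored" condition is assumed to be checked separately.

```python
def update_weights(weights, greeting, evaluation):
    """Multiply the chosen greeting's weight by a reward derived from a
    1-5 questionnaire evaluation. The linear scaling is an illustrative
    assumption: 3 is neutral (factor 1.0), 5 rewards, 1 punishes."""
    factor = 1.0 + 0.1 * (evaluation - 3)
    weights[greeting] *= factor
    # Renormalize so the weights remain a probability distribution
    total = sum(weights.values())
    for g in weights:
        weights[g] /= total
    return weights

def should_stop(transition_magnitudes, threshold=0.01, window=10):
    """Second stopping condition: the moving average of the latest
    `window` state-transition magnitudes has dropped below a threshold."""
    if len(transition_magnitudes) < window:
        return False
    recent = transition_magnitudes[-window:]
    return sum(recent) / window < threshold
```

A positive evaluation thus nudges probability mass toward the chosen greeting, while a low evaluation shifts it away, and training halts once the mapping stops changing appreciably.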



		</section>
</div></div>

<div class="slide" id="30"><div>
		<section>
			<header>
				<h1 id="model-of-greetings-summary">Model of Greetings: Summary</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Thanks to this implementation, mappings can evolve quickly, without requiring hundreds or thousands of iterations</li>
  <li>but rather a number comparable to the low number of interactions humans need to understand and adapt to social rules.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="31"><div>
		<section>
			<header>
				<h1 id="todo-please-add-slides-over-chapter-3-implementation-of-armar-iiib">TODO: Please Add slides over chapter (3. implementation of ARMAR-IIIb)</h1>
			</header>
			<!-- _S9SLIDE_ -->




		</section>
</div></div>

<div class="slide" id="32"><div>
		<section>
			<header>
				<h1 id="implementation-on-armar-iiib">Implementation on ARMAR-IIIb</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>ARMAR-III is designed for close cooperation with humans.</li>
  <li>ARMAR-III has a human-like appearance</li>
  <li>and sensory capabilities similar to humans.</li>
  <li>ARMAR-IIIb is a slightly modified version with a different shape of the head, the trunk, and the hands.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="33"><div>
		<section>
			<header>
				<h1 id="implementation-of-gestures">Implementation of gestures</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware.</li>
  <li>The patterns of the gestures are defined manually.</li>
  <li>Gestures are defined in the Master Motor Map (MMM) format and then converted to the robot.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="34"><div>
		<section>
			<header>
				<h1 id="master-motor-map">Master Motor Map</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The MMM is a reference 3D kinematic model,</li>
  <li>providing a unified representation for various human motion capture systems, action recognition systems, imitation systems, and visualization modules.</li>
  <li>This representation can subsequently be converted to other representations, such as action recognizers, 3D visualization, or implementations on different robots.</li>
  <li>The MMM is intended to become a common standard in the robotics community.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="35"><div>
		<section>
			<header>
				<h1 id="master-motor-map-1">Master Motor Map</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/MMM.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="36"><div>
		<section>
			<header>
				<h1 id="master-motor-map-2">Master Motor Map</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The body model of the MMM can be seen in the left-hand illustration in the figure.</li>
  <li>It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots.</li>
  <li>A conversion module is necessary to perform a transformation between this kinematic model and the ARMAR-IIIb kinematic model.</li>
</ul>
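The conversion module's role can be illustrated with a toy joint-retargeting function. The joint names and the torso-to-neck redirection for the bow are assumptions based on these slides, not the actual MMM converter.

```python
# Toy sketch of MMM-to-robot joint retargeting. Joint names are
# illustrative; the real MMM converter handles full kinematic chains.
MMM_TO_ARMAR = {
    "neck_pitch": "head_pitch",
    "shoulder_r_pitch": "arm_r_joint1",
    "elbow_r": "arm_r_joint4",
    # 'clavicula_r' has no ARMAR counterpart and is dropped
}

def convert_posture(mmm_angles):
    """Map MMM joint angles (radians) onto robot joints, dropping joints
    the robot lacks and redirecting torso bend to the neck (as assumed
    for the bow gesture)."""
    robot = {}
    def add(joint, angle):
        robot[joint] = robot.get(joint, 0.0) + angle
    for joint, angle in mmm_angles.items():
        if joint == "torso_pitch":
            # ARMAR-IIIb cannot bend its torso for a bow: express the
            # motion with the neck instead (an assumed simplification)
            add("head_pitch", angle)
        elif joint in MMM_TO_ARMAR:
            add(MMM_TO_ARMAR[joint], angle)
        # joints with no mapping (e.g. the clavicula) are silently dropped
    return robot

posture = convert_posture({"torso_pitch": 0.4, "neck_pitch": 0.1,
                           "clavicula_r": 0.05})
```

Keeping the gesture definitions in the MMM model and pushing robot-specific details into such a converter is what makes the gestures portable to other humanoids.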



		</section>
</div></div>

<div class="slide" id="37"><div>
		<section>
			<header>
				<h1 id="master-motor-map-3">Master Motor Map</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/MMMModel.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="38"><div>
		<section>
			<header>
				<h1 id="mmm-support">MMM support</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The MMM framework provides broad support for every kind of human-like robot.</li>
  <li>MMM can define the transfer rules.</li>
  <li>Using the conversion rules, motions can be converted from the MMM model to movements of the robot.</li>
  <li>Some motions may not be convertible from the MMM model to a specific robot,</li>
  <li>but the motion representation parts of the MMM can be used nevertheless.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="39"><div>
		<section>
			<header>
				<h1 id="conversion-example-of-mmm">Conversion example of MMM</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>After the motions were programmed on the MMM model, they were processed by the converter.</li>
  <li>The human model contains many joints which are not present in the robot configuration.</li>
  <li>ARMAR cannot bend its body when performing a bow;</li>
  <li>the bow was instead expressed using a part present in the robot (e.g., the neck).</li>
</ul>



		</section>
</div></div>

<div class="slide" id="40"><div>
		<section>
			<header>
				<h1 id="gestureexample">GestureExample</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/GestureExample.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="41"><div>
		<section>
			<header>
				<h1 id="implementgesturearmar">ImplementGestureARMARⅢ</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/ImplementGestureARMARⅢ.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="42"><div>
		<section>
			<header>
				<h1 id="modular-controller-architecture-a-modular-software-framework">Modular Controller Architecture, a modular software framework</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greetings model was also implemented.</li>
  <li>The list of postures is on the left, together with the option.</li>
  <li>When that option is activated, it is possible to select the context parameters through the radio buttons on the right.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="43"><div>
		<section>
			<header>
				<h1 id="modular-controller-architecture-a-modular-software-framework-1">Modular Controller Architecture, a modular software framework</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/MCA.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="44"><div>
		<section>
			<header>
				<h1 id="implementation-of-words">Implementation of words</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Greeting words were prepared in both Japanese and German.</li>
  <li>For example, in Japan it is common to use a specific greeting in the workplace, 「otsukaresama desu」,</li>
  <li>where a standard greeting like 「konnichi wa」 would be inappropriate.</li>
  <li>In German, such a greeting type does not exist,</li>
  <li>but the meaning of “thank you for your effort” at work can be directly translated into German.</li>
  <li>The robot knows dictionary terms, but does not understand the difference in usage of these words in different contexts.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="45"><div>
		<section>
			<header>
				<h1 id="table-of-greeting-words">table of greeting words</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/tableofgreetingwords.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="46"><div>
		<section>
			<header>
				<h1 id="implementation-of-words-1">Implementation of words</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>These words were recorded through free text-to-speech software into wave files that could be played by the robot.</li>
  <li>ARMAR does not have speakers embedded in its body,</li>
  <li>so two small speakers were added behind the head and connected to another computer.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="47"><div>
		<section>
			<header>
				<h1 id="experiment-description">Experiment description</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Experiments were conducted in Germany, in the room shown in the figure.
<img src="pictures/room.png" style="width: 60%; margin-left: 150px; margin-top: 50px;" /></li>
</ul>



		</section>
</div></div>

<div class="slide" id="48"><div>
		<section>
			<header>
				<h1 id="experiment-description2">Experiment description2</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Participants were 18 Germans of different ages, genders, and workplaces,</li>
  <li>so the robot could be trained with various combinations of context.</li>
  <li>It was not possible to include all combinations of feature values in the experiment;</li>
  <li>for example, there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’].</li>
  <li>The [‘location’: ‘private’] case was left out, because it is impossible to simulate the interaction in a private context, such as one’s home.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="49"><div>
		<section>
			<header>
				<h1 id="experiment-description3">Experiment description3</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>Some participants repeated the experiment more than once.</li>
  <li>For example, the experiment was repeated at a different time of day,</li>
  <li>or with the social distance changed from ‘unknown’ to ‘acquaintance’ between interactions.</li>
  <li>In this way we could collect more data by manipulating the value of a single feature.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="50"><div>
		<section>
			<header>
				<h1 id="statistics-of-participants">Statistics of participants</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The demographics of the 18 participants were as follows:
    <ol>
      <li>gender: M: 10; F: 8</li>
      <li>average age: 31.33</li>
      <li>age standard deviation: 13.16</li>
    </ol>
  </li>
</ul>



		</section>
</div></div>

<div class="slide" id="51"><div>
		<section>
			<header>
				<h1 id="tatistics-of-participants">tatistics of participants</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The number of interactions was determined by the stopping condition of the algorithm.</li>
  <li>The number of interactions, taking repetitions into account, was 30:
    <ol>
      <li>gender: M: 18; F: 12</li>
      <li>average age: 29.43</li>
      <li>age standard deviation: 12.46</li>
    </ol>
  </li>
</ul>



		</section>
</div></div>

<div class="slide" id="52"><div>
		<section>
			<header>
				<h1 id="the-experiment-protocol-is-as-follows-15">The experiment protocol is as follows 1~5</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ol>
  <li>ARMAR-IIIb is trained with Japanese data.</li>
  <li>The context data of the encounter are given as inputs to the algorithm and the robot is prepared.</li>
  <li>The participant is told the current situation and prompted to interact with the robot accordingly.</li>
  <li>The participant enters the room.</li>
  <li>The robot’s greeting is triggered by an operator as the human participant approaches.</li>
</ol>



		</section>
</div></div>

<div class="slide" id="53"><div>
		<section>
			<header>
				<h1 id="the-experiment-protocol-is-as-follows-610">The experiment protocol is as follows 6~10</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ol>
  <li>After the two parties have greeted each other, the robot is turned off.</li>
  <li>The participant evaluates the robot’s behaviour through a questionnaire.</li>
  <li>The mapping is updated using the participant’s feedback.</li>
  <li>Steps 2–8 are repeated for each participant.</li>
  <li>Training stops once the state transitions have stabilized.</li>
</ol>



		</section>
</div></div>

<div class="slide" id="54"><div>
		<section>
			<header>
				<h1 id="results">Results</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The gesture mapping changed over the course of the experiment.</li>
  <li>Bowing was greatly reduced, and the handshake became common.</li>
  <li>A hug, which did not exist in the Japanese mapping, appeared.</li>
  <li>This is because participants gave feedback that a hug would be appropriate.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="55"><div>
		<section>
			<header>
				<h1 id="results-1">Results</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/GestureTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="56"><div>
		<section>
			<header>
				<h1 id="results-2">Results</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The biggest change in the words mapping is that the workplace-specific greeting disappeared.</li>
  <li>A smaller change is the increased use of informal greetings.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="57"><div>
		<section>
			<header>
				<h1 id="results-3">Results</h1>
			</header>
			<!-- _S9SLIDE_ -->

<p><img src="pictures/GreetingWordTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>



		</section>
</div></div>

<div class="slide" id="58"><div>
		<section>
			<header>
				<h1 id="limitations-and-improvements">Limitations and improvements</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>The first obvious limitation is the manual input of context data.</li>
  <li>The integrated use of cameras would make it possible to determine features such as the gender, age, and race of the human.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="59"><div>
		<section>
			<header>
				<h1 id="limitations-and-improvements-1">Limitations and improvements</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>A speech recognition system and cameras could also detect the human’s own greeting,</li>
  <li>allowing the robot itself to determine whether its greeting was correct.</li>
  <li>This decision could check the distance to the partner, the timing of the greeting, head orientation, or other information to judge whether the response to a greeting is correct and what is expected.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="60"><div>
		<section>
			<header>
				<h1 id="limitations-and-improvements-2">Limitations and improvements</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>It is possible to extend the set of context features by using additional data sources.</li>
</ul>



		</section>
</div></div>

<div class="slide" id="61"><div>
		<section>
			<header>
				<h1 id="different-kinds-of-embodiment">Different kinds of embodiment</h1>
			</header>
			<!-- _S9SLIDE_ -->

<ul>
  <li>A humanoid robot has a body similar to a human’s,</li>
  <li>but robots can vary in shape, size, and capabilities.</li>
  <li>By extending this work, a robot could start discovering, depending on its own physical characteristics, the interaction method best suited to itself and humans.</li>
</ul>

<style>
    .slide.cover H2 { font-size: 60px; }
</style>

<!-- vim: set filetype=markdown.slide: -->
<!-- === end markdown block === -->

		</section>
</div></div>


	<script src="scripts/script.js"></script>
	<!-- Copyright © 2010–2011 Vadim Makeev, http://pepelsbey.net/ -->
</body>
</html>