<!DOCTYPE HTML>

<html lang="en">

<head>

<title>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</title>

<meta charset="UTF-8">

<meta name="viewport" content="width=1274, user-scalable=no">

<meta name="generator" content="Slide Show (S9)">

<meta name="author" content="Tatsuki KANAGAWA <br> Yasutaka HIGA">

<link rel="stylesheet" href="themes/ribbon/styles/style.css">

</head>

<body class="list">

<header class="caption">

<h1>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h1>

<p>Tatsuki KANAGAWA <br> Yasutaka HIGA</p>

</header>

<div class="slide cover" id="Cover"><div>

<section>

<header>

<h2>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h2>

<h3 id="author">Tatsuki KANAGAWA <br> Yasutaka HIGA</h3>

<h3 id="profile">Concurrency Reliance Lab</h3>

</header>

</section>

</div></div>

<!-- todo: add slide.classes to div -->

<!-- todo: create slide id from header? like a slug in blogs? -->

<div class="slide" id="2"><div>

<section>

<header>

<h1 id="abstract-robots-and-cultures">Abstract: Robots and cultures</h1>

</header>

<!-- === begin markdown block ===

generated by markdown/1.2.0 on Ruby 1.9.3 (2011-10-30) [x86_64-darwin10]
on 2015-06-26 09:38:02 +0900 with Markdown engine kramdown (1.7.0)
using options {}
-->

<!-- _S9SLIDE_ -->

<ul>

<li>Robots, especially humanoids, are expected to perform human-like actions and adapt to our ways of communication in order to facilitate their acceptance in human society.</li>

<li>Among humans, the rules of communication change depending on background culture.</li>

<li>Greetings are a part of communication in which cultural differences are strong.</li>

</ul>

</section>

</div></div>

<div class="slide" id="3"><div>

<section>

<header>

<h1 id="abstract-summary-of-this-paper">Abstract: Summary of this paper</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>In this paper, we present the modelling of social factors that influence greeting choice,</li>

<li>and the resulting novel culture-dependent greeting gesture and words selection system.</li>

<li>An experiment with German participants was run using the humanoid robot ARMAR-IIIb.</li>

</ul>

</section>

</div></div>

<div class="slide" id="4"><div>

<section>

<header>

<h1 id="introduction-acceptance-of-humanoid-robots">Introduction: Acceptance of humanoid robots</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Acceptance of humanoid robots in human societies is a critical issue.</li>

<li>One of the main factors is the relationship between the background culture of human partners and acceptance.
<ul>

<li>ecologies, social structures, philosophies, educational systems.</li>

</ul>
</li>

</ul>

</section>

</div></div>

<div class="slide" id="5"><div>

<section>

<header>

<h1 id="introduction-culture-adapted-greetings">Introduction: Culture adapted greetings</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>In the work of Trovato et al., culture-dependent acceptance and discomfort relating to greeting gestures were found in a comparative study with Egyptian and Japanese participants.</li>

<li>The importance of culture-specific customization of greetings was thus confirmed.</li>

<li>Acceptance of robots can be improved if they are able to adapt to different kinds of greeting rules.</li>

</ul>

</section>

</div></div>

<div class="slide" id="6"><div>

<section>

<header>

<h1 id="introduction-methods-of-implementation-adaptive-behaviour">Introduction: Methods of implementing adaptive behaviour</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Adaptive behaviour in robotics can be achieved through various methods:
<ul>

<li>reinforcement learning</li>

<li>neural networks</li>

<li>genetic algorithms</li>

<li>function regression</li>

</ul>
</li>

</ul>

</section>

</div></div>

<div class="slide" id="7"><div>

<section>

<header>

<h1 id="introduction-greeting-interaction-with-robots">Introduction: Greeting interaction with robots</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Robots are expected to interact and communicate with humans of different cultural backgrounds in a natural way.</li>

<li>It is therefore important to study greeting interaction between robots and humans.
<ul>

<li>ARMAR-III: greeted the Chancellor of Germany with a handshake</li>

<li>ASIMO: is capable of performing a wider range of greetings</li>

<li>(a handshake, waving both hands, and bowing)</li>

</ul>
</li>

</ul>

</section>

</div></div>

<div class="slide" id="8"><div>

<section>

<header>

<h1 id="introduction-objectives-of-this-paper">Introduction: Objectives of this paper</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The robot should be trained with sociology data related to one country, and evolve its behaviour by engaging with people of another country in a small number of interactions.</li>

<li>For the implementation of the gestures and the interaction experiment, we used the humanoid robot ARMAR-IIIb.</li>

<li>As the experiment is carried out in Germany, the interactions are with German participants, while preliminary training is done with Japanese data, which is culturally extremely different.</li>

</ul>

</section>

</div></div>

<div class="slide" id="9"><div>

<section>

<header>

<h1 id="introduction-armar-iiib">Introduction: ARMAR-IIIb</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/ARMAR-IIIb.png" style="width: 350px; height: 350px; margin-left: 200px;" /></p>

</section>

</div></div>

<div class="slide" id="10"><div>

<section>

<header>

<h1 id="introduction-target-scenario">Introduction: Target scenario</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The idea behind this study is a typical scenario in which a foreigner visiting a country for the first time greets local people in an inappropriate way, as long as he is unaware of the rules that define the greeting choice.
<ul>

<li>(e.g., a Westerner in Japan)</li>

</ul>
</li>

<li>For example, he might want to shake hands or hug, and will receive a bow instead.</li>

</ul>

</section>

</div></div>

<div class="slide" id="11"><div>

<section>

<header>

<h1 id="introduction-objectives-of-this-work">Introduction: Objectives of this work</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>This work is an application of sociological research to robotics.</li>

<li>Our contribution is to synthesize the complex and sparse data related to greeting types into a model;</li>

<li>create a selection and adaptation system;</li>

<li>and implement the greetings in a way that can potentially be applied to any robot.</li>

</ul>

</section>

</div></div>

<div class="slide" id="12"><div>

<section>

<header>

<h1 id="greeting-selection-greetings-among-humans">Greeting Selection: Greetings among humans</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Greetings are the means of initiating and closing an interaction.</li>

<li>We desire that robots be able to greet people in a similar way to humans.</li>

<li>For this reason, understanding current research on greetings in sociological studies is necessary.</li>

<li>Moreover, depending on cultural background, there can be different rules of engagement in human-human interaction.</li>

</ul>

</section>

</div></div>

<div class="slide" id="13"><div>

<section>

<header>

<h1 id="greeting-selection-solution-for-selection">Greeting Selection: Solution for selection</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>A unified model of greetings does not seem to exist in the literature, but a few studies have attempted a classification of greetings.</li>

<li>Some more specific studies have been done on handshaking.</li>

</ul>

</section>

</div></div>

<div class="slide" id="14"><div>

<section>

<header>

<h1 id="greeting-selection-classes-for-greetings">Greeting Selection: Classes for greetings</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>A classification of greetings was first attempted by Friedman based on intimacy and commonness.</li>

<li>The following greeting types were mentioned: smile; wave; nod; kiss on mouth; kiss on cheek; hug; handshake; pat on back; rising; bow; salute; and kiss on hand.</li>

<li>Greenbaum et al. also performed a gender-related investigation, while [24] contained a comparative study between Germans and Japanese.</li>

</ul>

</section>

</div></div>

<div class="slide" id="15"><div>

<section>

<header>

<h1 id="greeting-selection-factors-on-classification">Greeting Selection: Factors on Classification</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>‘terms’: same terms with different meanings, or different terms with the same meaning.</li>

<li>‘location’: influences intimacy and greeting words (private or public).</li>

<li>‘intimacy’: influenced by physical distance, eye contact, gender, location, and culture (social distance).</li>

<li>‘time’: the time of day is important for the choice of words.</li>

<li>‘politeness’, ‘power relationship’, ‘culture’, and more.</li>

</ul>

</section>

</div></div>

<div class="slide" id="16"><div>

<section>

<header>

<h1 id="greeting-selection-factors-on-classification-1">Greeting Selection: Factors on Classification</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The factors to be cut are greyed out.</li>

</ul>

<p><img src="pictures/factors.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="17"><div>

<section>

<header>

<h1 id="model-of-greetings-assumptions-1---5">Model of Greetings: Assumptions (1 - 5)</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The simplification was guided by the following ten assumptions.</li>

<li>Only two individuals (a robot and a human participant): we do not take into consideration a higher number of individuals.</li>

<li>Eye contact is taken for granted.</li>

<li>Age is considered part of ‘power relationship’.</li>

<li>Regionality is not considered.</li>

<li>Setting is not considered.</li>

</ul>

</section>

</div></div>

<div class="slide" id="18"><div>

<section>

<header>

<h1 id="model-of-greetings-assumptions-6---10">Model of Greetings: Assumptions (6 - 10)</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Physical distance is close enough to allow interaction.</li>

<li>Gender is intended to be a same-sex dyad.</li>

<li>Affect is considered together with ‘social distance’.</li>

<li>Time since the last interaction is partially included in ‘social distance’.</li>

<li>Intimacy and politeness are not necessary as separate factors.</li>

</ul>

</section>

</div></div>

<div class="slide" id="19"><div>

<section>

<header>

<h1 id="model-of-greetings-basis-of-classification">Model of Greetings: Basis of classification</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Input
<ul>

<li>All the other factors are then considered features of a mapping problem</li>

<li>They are categorical data, as they can assume only two or three values.</li>

</ul>
</li>

<li>Output
<ul>

<li>The outputs can also assume only a limited set of categorical values.</li>

</ul>
</li>

</ul>

</section>

</div></div>

<div class="slide" id="20"><div>

<section>

<header>

<h1 id="model-of-greetings-features-mapping-discriminants-classes-and-possible-status">Model of Greetings: Features, mapping discriminants, classes, and possible status</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/classes.png" style="width: 60%; margin-left: 150px;" /></p>

</section>

</div></div>

<div class="slide" id="21"><div>

<section>

<header>

<h1 id="model-of-greetings-overview-of-the-greeting-model">Model of Greetings: Overview of the greeting model</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The greeting model takes context data as input and produces the appropriate robot posture and speech for that input.</li>

<li>The two outputs are evaluated by the participants of the experiment through written questionnaires.</li>

<li>The training data obtained from these evaluations are given as feedback to the two mappings.</li>

</ul>
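The input-output loop described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the class name, the context tuple, and the candidate weights are all assumptions made for the example.

```python
# Sketch of the greeting model's interface: a context vector goes in,
# a gesture and greeting words come out; questionnaire feedback would
# later adjust the weights stored in the two mappings.

class GreetingModel:
    def __init__(self, gesture_map, word_map):
        # each mapping: context tuple -> {candidate greeting: weight}
        self.gesture_map = gesture_map
        self.word_map = word_map

    def select(self, context):
        # pick the highest-weighted candidate in each mapping
        gesture = max(self.gesture_map[context],
                      key=self.gesture_map[context].get)
        words = max(self.word_map[context],
                    key=self.word_map[context].get)
        return gesture, words

# illustrative context and weights
ctx = ("workplace", "acquaintance", "morning")
model = GreetingModel(
    {ctx: {"bow": 0.7, "handshake": 0.3}},
    {ctx: {"ohayou gozaimasu": 0.8, "konnichi wa": 0.2}},
)
print(model.select(ctx))  # ('bow', 'ohayou gozaimasu')
```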

</section>

</div></div>

<div class="slide" id="22"><div>

<section>

<header>

<h1 id="model-of-greetings-overview-of-the-greeting-model-1">Model of Greetings: Overview of the greeting model</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/model_overview.png" style="width: 75%; margin-left: 120px;" /></p>

</section>

</div></div>

<div class="slide" id="23"><div>

<section>

<header>

<h1 id="greeting-selection-system-training-data">Greeting selection system: training data</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Mappings can be trained to an initial state with data taken from the literature of sociology studies.</li>

<li>Training data should be classified through some machine learning method or formula.</li>

<li>We decided to use conditional probabilities, in particular the Naive Bayes formula, to map the data.</li>

<li>Naive Bayes only requires a small amount of training data.</li>

</ul>
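The slide above motivates Naive Bayes by its small data requirement. A hedged sketch of how it applies to categorical context features follows; the feature values, the greeting labels, and the two-value Laplace smoothing constant are illustrative assumptions, not taken from the paper.

```python
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (feature tuple, greeting label) pairs."""
    label_counts = Counter(lbl for _, lbl in samples)
    feat_counts = defaultdict(Counter)  # (feature index, label) -> value counts
    for feats, lbl in samples:
        for i, v in enumerate(feats):
            feat_counts[(i, lbl)][v] += 1
    return label_counts, feat_counts

def predict(model, feats, labels):
    """Pick the label maximising P(label) * prod_i P(feat_i | label)."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for lbl in labels:
        p = label_counts[lbl] / total
        for i, v in enumerate(feats):
            c = feat_counts[(i, lbl)]
            # Laplace smoothing, assuming two possible values per feature
            p *= (c[v] + 1) / (sum(c.values()) + 2)
        if p > best_p:
            best, best_p = lbl, p
    return best

data = [(("public", "stranger"), "bow"),
        (("public", "stranger"), "bow"),
        (("workplace", "acquaintance"), "nod")]
model = train(data)
print(predict(model, ("public", "stranger"), ["bow", "nod"]))  # bow
```

Only per-feature counts are needed, which is why a handful of samples already yields usable probabilities.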

</section>

</div></div>

<div class="slide" id="24"><div>

<section>

<header>

<h1 id="model-of-greetings-details-of-training-data">Model of Greetings: Details of training data</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>While training data of gestures can be obtained from the literature, data of words can also be obtained from text corpora.</li>

<li>English: English corpora, such as the British National Corpus or the Corpus of Historical American English, are used.</li>

<li>Japanese: extracted from the data sets of [24, 37, 41-43], since analysing Japanese corpora is difficult.</li>

</ul>

</section>

</div></div>

<div class="slide" id="25"><div>

<section>

<header>

<h1 id="model-of-greetings-location-assumption">Model of Greetings: Location Assumption</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The location of the experiment was Germany.</li>

<li>For this reason, the only dataset needed was the Japanese one.</li>

<li>As stated in the motivations at the beginning of this paper, the robot should initially behave like a foreigner.</li>

<li>ARMAR-IIIb, trained with Japanese data, will have to interact with German people and adapt to their customs.</li>

</ul>

</section>

</div></div>

<div class="slide" id="26"><div>

<section>

<header>

<h1 id="model-of-greetings-mappings-and-questionnaires">Model of Greetings: Mappings and questionnaires</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The mapping is represented by a dataset, initially built from training data, as a table containing weights for each context vector corresponding to each greeting type.</li>

<li>We now need to update these weights.</li>

</ul>

</section>

</div></div>

<div class="slide" id="27"><div>

<section>

<header>

<h1 id="feedback-from-three-questionnaires">Feedback from three questionnaires</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Whenever a new feature vector is given as an input, it is checked to see whether it is already contained in the dataset or not.</li>

<li>In the former case, the weights are directly read from the dataset;</li>

<li>in the latter case, they are assigned the values of the probabilities calculated through the Naive Bayes classifier.</li>

<li>The output is the chosen greeting, after which the interaction is evaluated through a questionnaire.</li>

</ul>
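The lookup-or-classify step above can be sketched as follows. This is an illustrative assumption about the data flow, not the paper's code; the stand-in classifier simply returns uniform probabilities.

```python
# Known contexts read their weights straight from the dataset; unseen
# contexts fall back to classifier probabilities, which are then cached
# so that later feedback has an entry to update.

def weights_for(dataset, context, classify):
    if context in dataset:               # former case: direct read
        return dataset[context]
    weights = classify(context)          # latter case: Naive Bayes fallback
    dataset[context] = weights           # cache for future feedback
    return weights

dataset = {("public", "stranger"): {"bow": 0.9, "wave": 0.1}}
uniform = lambda ctx: {"bow": 0.5, "wave": 0.5}  # stand-in classifier

print(weights_for(dataset, ("public", "stranger"), uniform))
print(weights_for(dataset, ("private", "friend"), uniform))
```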

</section>

</div></div>

<div class="slide" id="28"><div>

<section>

<header>

<h1 id="model-of-greetings-three-questionnaires-for-feedback">Model of Greetings: Three questionnaires for feedback</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Questionnaire answers are given on a five-point semantic differential scale:
<ol>

<li>How appropriate was the greeting chosen by the robot for the current context?</li>

<li>(If the evaluation at point 1 was &lt;= 3) which greeting type would have been appropriate instead?</li>

<li>(If the evaluation at point 1 was &lt;= 3) which context would have been appropriate, if any, for the greeting type of point 1?</li>

</ol>
</li>

</ul>

</section>

</div></div>

<div class="slide" id="29"><div>

<section>

<header>

<h1 id="model-of-greetings-feedback-and-terminate-condition">Model of Greetings: Feedback and termination condition</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Weights of the affected features are multiplied by a positive or negative reward (inspired by reinforcement learning), which is calculated proportionally to the evaluation.</li>

<li>Mappings stop evolving when the following two stopping conditions are satisfied:</li>

<li>all possible values of all features have been explored,</li>

<li>and the moving average of the latest 10 state transitions has decreased below a certain threshold.</li>

</ul>
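A minimal sketch of the two mechanisms above: a multiplicative reward proportional to the questionnaire score, and a moving-average stopping test. The neutral score, learning rate, window size, and threshold are assumed constants for illustration, not values from the paper.

```python
def apply_feedback(weights, greeting, score, neutral=3, rate=0.1):
    """score: 1..5 questionnaire evaluation; above 3 rewards, below 3 punishes."""
    reward = 1.0 + rate * (score - neutral)   # proportional to the evaluation
    weights[greeting] *= reward
    total = sum(weights.values())
    for g in weights:                         # renormalise the weights
        weights[g] /= total
    return reward

def converged(deltas, window=10, threshold=0.01):
    """Stop once the moving average of the latest changes is small enough."""
    if len(deltas) < window:
        return False
    return sum(deltas[-window:]) / window < threshold

w = {"bow": 0.5, "handshake": 0.5}
apply_feedback(w, "handshake", 5)             # a positive evaluation
print(w["handshake"] > w["bow"])              # True
```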

</section>

</div></div>

<div class="slide" id="30"><div>

<section>

<header>

<h1 id="model-of-greetings-summary">Model of Greetings: Summary</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Thanks to this implementation, mappings can evolve quickly, without requiring hundreds or thousands of iterations,</li>

<li>but rather a number comparable to the low number of interactions humans need to understand and adapt to social rules.</li>

</ul>

</section>

</div></div>

<div class="slide" id="31"><div>

<section>

<header>

<h1 id="todo-please-add-slides-over-chapter-3-implementation-of-armar-iiib">TODO: Please add slides over chapter (3. implementation of ARMAR-IIIb)</h1>

</header>

<!-- _S9SLIDE_ -->

</section>

</div></div>

<div class="slide" id="32"><div>

<section>

<header>

<h1 id="implementation-on-armar-iiib">Implementation on ARMAR-IIIb</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>ARMAR-III is designed for close cooperation with humans</li>

<li>ARMAR-III has a humanlike appearance</li>

<li>and sensory capabilities similar to a human's</li>

<li>ARMAR-IIIb is a slightly modified version with a different shape of the head, the trunk, and the hands</li>

</ul>

</section>

</div></div>

<div class="slide" id="33"><div>

<section>

<header>

<h1 id="implementation-of-gestures">Implementation of gestures</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware</li>

<li>the patterns of the gestures are defined manually</li>

<li>gestures are defined in the Master Motor Map (MMM) format and then converted to the robot</li>

</ul>

</section>

</div></div>

<div class="slide" id="34"><div>

<section>

<header>

<h1 id="master-motor-map">Master Motor Map</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The MMM is a reference 3D kinematic model</li>

<li>providing a unified representation of various human motion capture systems, action recognition systems, imitation systems, and visualization modules</li>

<li>This representation can be subsequently converted to other representations, such as action recognizers, 3D visualization, or implementation on different robots</li>

<li>The MMM is intended to become a common standard in the robotics community</li>

</ul>

</section>

</div></div>

<div class="slide" id="35"><div>

<section>

<header>

<h1 id="master-motor-map-1">Master Motor Map</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/MMM.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="36"><div>

<section>

<header>

<h1 id="master-motor-map-2">Master Motor Map</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The body model of the MMM can be seen in the left-hand illustration in the figure</li>

<li>It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots</li>

<li>A conversion module is necessary to perform a transformation between this kinematic model and the ARMAR-IIIb kinematic model</li>

</ul>
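A toy sketch of that conversion step: MMM joint angles are mapped onto the robot's available joints, and motion for joints the robot lacks is either dropped or re-expressed on a joint it does have (as the later slides describe for the bow). The joint names and the remapping rule are assumptions for illustration only.

```python
# Map MMM joints to ARMAR joints; torso pitch is folded into the neck
# because ARMAR cannot bend its body, and joints with no counterpart
# (e.g. the clavicula) are dropped.

MMM_TO_ARMAR = {
    "neck_pitch": "head_pitch",
    "torso_pitch": "head_pitch",   # assumed remap: express the bow on the neck
}

def convert(mmm_frame):
    robot_frame = {}
    for joint, angle in mmm_frame.items():
        target = MMM_TO_ARMAR.get(joint)
        if target is None:
            continue                      # joint absent on the robot
        robot_frame[target] = robot_frame.get(target, 0.0) + angle
    return robot_frame

bow = {"torso_pitch": 30.0, "neck_pitch": 10.0, "clavicula": 5.0}
print(convert(bow))  # {'head_pitch': 40.0}
```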

</section>

</div></div>

<div class="slide" id="37"><div>

<section>

<header>

<h1 id="master-motor-map-3">Master Motor Map</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/MMMModel.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="38"><div>

<section>

<header>

<h1 id="mmm-support">MMM support</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The MMM framework supports a wide range of human-like robots</li>

<li>MMM can define the transfer rules</li>

<li>using the conversion rules, motion can be converted from the MMM model to the movement of the robot</li>

<li>conversion from the MMM model may not be possible for a specific robot,</li>

<li>but the motion representation parts of the MMM can be used nevertheless</li>

</ul>

</section>

</div></div>

<div class="slide" id="39"><div>

<section>

<header>

<h1 id="conversion-example-of-mmm">Conversion example of MMM</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>After programming the motions on the MMM model, they were processed by the converter</li>

<li>the human model contains many joints which are not present in the robot configuration</li>

<li>ARMAR does not bend the body when performing a bow</li>

<li>the bow was instead expressed using a part present on the robot (e.g., the neck)</li>

</ul>

</section>

</div></div>

<div class="slide" id="40"><div>

<section>

<header>

<h1 id="gestureexample">Gesture example</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/GestureExample.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="41"><div>

<section>

<header>

<h1 id="implementgesturearmar">Implementation of gestures on ARMAR-III</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/ImplementGestureARMARⅢ.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="42"><div>

<section>

<header>

<h1 id="modular-controller-architecture-a-modular-software-framework">Modular Controller Architecture, a modular software framework</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greeting model was also implemented</li>

<li>the list of postures is on the left, together with the option</li>

<li>when that option is activated, it is possible to select the context parameters through the radio buttons on the right</li>

</ul>

</section>

</div></div>

<div class="slide" id="43"><div>

<section>

<header>

<h1 id="modular-controller-architecture-a-modular-software-framework-1">Modular Controller Architecture, a modular software framework</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/MCA.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="44"><div>

<section>

<header>

<h1 id="implementation-of-words">Implementation of words</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Greeting words are used in two languages: Japanese and German</li>

<li>For example, in Japan it is common to use a specific greeting in the workplace, 「otsukaresama desu」,</li>

<li>where a standard greeting like 「konnichi wa」 would be inappropriate</li>

<li>In German, such a greeting type does not exist,</li>

<li>but the meaning of “thank you for your effort” at work can be directly translated into German</li>

<li>the robot knows dictionary terms, but does not understand the difference in usage of these words in different contexts</li>

</ul>
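The context-dependence described above amounts to a lookup keyed on language and context rather than on dictionary meaning alone. A sketch with illustrative entries (the table below is an example, not the paper's full word list):

```python
# The same workplace context selects "otsukaresama desu" in Japanese,
# while German, lacking a workplace-specific greeting, falls back to a
# standard one.

GREETINGS = {
    ("ja", "workplace"): "otsukaresama desu",
    ("ja", "public"):    "konnichi wa",
    ("de", "workplace"): "guten Tag",   # no workplace-specific form exists
    ("de", "public"):    "guten Tag",
}

def greeting_words(language, location):
    return GREETINGS[(language, location)]

print(greeting_words("ja", "workplace"))  # otsukaresama desu
print(greeting_words("de", "workplace"))  # guten Tag
```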

</section>

</div></div>

<div class="slide" id="45"><div>

<section>

<header>

<h1 id="table-of-greeting-words">Table of greeting words</h1>

</header>

<!-- _S9SLIDE_ -->

<p><img src="pictures/tableofgreetingwords.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>

</div></div>

<div class="slide" id="46"><div>

<section>

<header>

<h1 id="implementation-of-words-1">Implementation of words</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>These words were recorded through free text-to-speech software into wave files that could be played by the robot</li>

<li>ARMAR does not have speakers embedded in its body</li>

<li>so two small speakers were added behind the head and connected to another computer</li>

</ul>

</section>

</div></div>

<div class="slide" id="47"><div>

<section>

<header>

<h1 id="experiment-description">Experiment description</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Experiments were conducted in Germany, in the room shown in the figure
<img src="pictures/room.png" style="width: 60%; margin-left: 150px; margin-top: 50px;" /></li>

</ul>

</section>

</div></div>

<div class="slide" id="48"><div>

<section>

<header>

<h1 id="experiment-description2">Experiment description 2</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Participants were 18 German people of different ages, genders, and workplaces</li>

<li>the robot could be trained with various combinations of context</li>

<li>It was not possible to include all combinations of feature values in the experiment:</li>

<li>for example, there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’]</li>

<li>the [‘location’: ‘private’] case was left out, because it is impossible to simulate the interaction in a private context, such as one’s home</li>

</ul>
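The constraints above can be expressed as a simple validity filter over candidate profiles. The sketch encodes only the two exclusions stated on this slide; the dictionary keys are illustrative assumptions about how a profile might be represented.

```python
# Not every combination of feature values is a valid experimental profile.

def is_valid_profile(profile):
    if (profile.get("location") == "workplace"
            and profile.get("social distance") == "unknown"):
        return False   # colleagues at a workplace are never 'unknown'
    if profile.get("location") == "private":
        return False   # a private context cannot be simulated in the lab
    return True

print(is_valid_profile({"location": "workplace",
                        "social distance": "unknown"}))  # False
print(is_valid_profile({"location": "public",
                        "social distance": "unknown"}))  # True
```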

</section>

</div></div>

<div class="slide" id="49"><div>

<section>

<header>

<h1 id="experiment-description3">Experiment description 3</h1>

</header>

<!-- _S9SLIDE_ -->

<ul>

<li>Some participants repeated the experiment more than once</li>

<li>for example, the experiment was repeated at different times of day</li>

<li>or with ‘social distance’ changed from ‘unknown’ to ‘acquaintance’ after the first encounter</li>

<li>in this way we could collect more data by manipulating the value of a single feature</li>

</ul>

</section>

</div></div>
|
|
907
|
11
|
<div class="slide" id="50"><div>
<section>
<header>
<h1 id="statistics-of-participants">Statistics of participants</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The demographics of the 18 participants were as follows
<ol>
<li>gender: M: 10; F: 8</li>
<li>average age: 31.33</li>
<li>age standard deviation: 13.16</li>
</ol>
</li>
</ul>


</section>
</div></div>

<div class="slide" id="51"><div>
<section>
<header>
<h1 id="tatistics-of-participants">Statistics of participants</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The number of interactions was determined by the stopping condition of the algorithm</li>
<li>Taking repetitions into account, the number of interactions was 30
<ol>
<li>gender: M: 18; F: 12</li>
<li>average age: 29.43</li>
<li>age standard deviation: 12.46</li>
</ol>
</li>
</ul>


</section>
</div></div>

<div class="slide" id="52"><div>
<section>
<header>
<h1 id="the-experiment-protocol-is-as-follows-15">The experiment protocol is as follows (steps 1–5)</h1>
</header>
<!-- _S9SLIDE_ -->

<ol>
<li>ARMAR-IIIb is trained in advance with the Japanese data</li>
<li>The context features of the encounter are given as inputs to the algorithm and the robot is prepared</li>
<li>The participant is asked to interact with the robot while taking the current situation into consideration</li>
<li>The participant enters the room</li>
<li>The robot’s greeting is triggered by an operator as the human participant approaches</li>
</ol>


</section>
</div></div>

<div class="slide" id="53"><div>
<section>
<header>
<h1 id="the-experiment-protocol-is-as-follows-610">The experiment protocol is as follows (steps 6–10)</h1>
</header>
<!-- _S9SLIDE_ -->

<ol>
<li>After the two parties have greeted each other, the robot is turned off</li>
<li>The participant evaluates the robot’s behaviour through a questionnaire</li>
<li>The mapping is updated using the participant’s feedback</li>
<li>Steps 2–8 are repeated for each participant</li>
<li>Training stops once the state changes have stabilized</li>
</ol>


</section>
</div></div>

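The protocol above amounts to a feedback-driven training loop. A minimal sketch, assuming a tabular context-to-greeting score table with a simple update from questionnaire feedback and a "no change for several rounds" stopping rule (the data structures, update rule, and stopping criterion are illustrative assumptions, not the authors' algorithm):

```python
# Hypothetical sketch of the feedback-driven training loop (steps 1-10 above).
# The mapping structure, scoring, and stopping rule are illustrative assumptions.

def train(initial_mapping, encounters, get_feedback, patience=5):
    """Update a context -> {greeting: score} table until it stabilizes.

    initial_mapping: e.g. the mapping learned from Japanese data (step 1)
    encounters: iterable of context tuples given as inputs (step 2)
    get_feedback: callable(context, greeting) -> rating in [0, 1] (step 7)
    """
    mapping = {c: dict(g) for c, g in initial_mapping.items()}
    stable_rounds = 0
    for context in encounters:
        greetings = mapping.setdefault(context, {})
        if not greetings:
            continue
        # Select the currently best-rated greeting for this context (step 5).
        best = max(greetings, key=greetings.get)
        rating = get_feedback(context, best)
        # Move the chosen greeting's score toward the participant's rating (step 8).
        greetings[best] += 0.5 * (rating - greetings[best])
        # Stop once the preferred greeting stops changing (step 10).
        new_best = max(greetings, key=greetings.get)
        stable_rounds = stable_rounds + 1 if new_best == best else 0
        if stable_rounds >= patience:
            break
    return mapping
```

For instance, starting from a Japanese-style mapping that prefers bowing, repeated low ratings for the bow and high ratings for the handshake shift the preferred greeting to the handshake before the loop stops.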
<div class="slide" id="54"><div>
<section>
<header>
<h1 id="results">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The gesture mapping changed over the course of the experiment</li>
<li>Handshakes became common, while bowing was greatly reduced</li>
<li>Hugging, which did not exist in the Japanese mapping, appeared</li>
<li>This is because participants gave feedback that a hug was appropriate</li>
</ul>


</section>
</div></div>

<div class="slide" id="55"><div>
<section>
<header>
<h1 id="results-1">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/GestureTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>


</section>
</div></div>

<div class="slide" id="56"><div>
<section>
<header>
<h1 id="results-2">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The biggest change in the greeting-word mapping is that workplace greetings disappeared</li>
<li>A smaller change is the increased use of informal greetings</li>
</ul>


</section>
</div></div>

<div class="slide" id="57"><div>
<section>
<header>
<h1 id="results-3">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/GreetingWordTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>


</section>
</div></div>

<div class="slide" id="58"><div>
<section>
<header>
<h1 id="limitations-and-improvements">Limitations and improvements</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The first obvious limitation is the manual input of context data</li>
<li>The integrated use of cameras would make it possible to determine features such as the gender, age, and race of the human</li>
</ul>


</section>
</div></div>

<div class="slide" id="59"><div>
<section>
<header>
<h1 id="limitations-and-improvements-1">Limitations and improvements</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>A speech recognition system and cameras could also detect the human’s own greeting</li>
<li>This would let the robot itself determine whether its greeting was correct</li>
<li>The robot could use the distance to the partner, the timing of the greeting, head orientation, or other information to decide whether its response to a greeting was correct and matched what was expected</li>
</ul>


</section>
</div></div>

<div class="slide" id="60"><div>
<section>
<header>
<h1 id="limitations-and-improvements-2">Limitations and improvements</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The set of context features could be extended by drawing on multiple source documents</li>
</ul>


</section>
</div></div>

<div class="slide" id="61"><div>
<section>
<header>
<h1 id="different-kinds-of-embodiment">Different kinds of embodiment</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>A humanoid robot has a body similar to a human’s</li>
<li>Robots, however, vary in shape, size, and capability</li>
<li>By extending this approach, a robot could discover, according to its own physical characteristics, the interaction methods that suit humans best</li>
</ul>

<style>
.slide.cover H2 { font-size: 60px; }
</style>

<!-- vim: set filetype=markdown.slide: -->
<!-- === end markdown block === -->

</section>
</div></div>


<script src="scripts/script.js"></script>
<!-- Copyright © 2010–2011 Vadim Makeev, http://pepelsbey.net/ -->
</body>
</html>