<!DOCTYPE HTML>
<html lang="en">
<head>
<title>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=1274, user-scalable=no">
<meta name="generator" content="Slide Show (S9)">
<meta name="author" content="Tatsuki KANAGAWA, Yasutaka HIGA">
<link rel="stylesheet" href="themes/ribbon/styles/style.css">
<link rel="stylesheet" href="slide.css">
</head>
<body class="list">
<header class="caption">
<h1>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h1>
<p>Tatsuki KANAGAWA <br> Yasutaka HIGA</p>
</header>
<div class="slide cover" id="Cover"><div>
<section>
<header>
<h2>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h2>
<h3 id="author">Tatsuki KANAGAWA <br> Yasutaka HIGA</h3>
<h3 id="profile">Concurrency Reliance Lab</h3>
</header>
</section>
</div></div>

<!-- todo: add slide.classes to div -->
<!-- todo: create slide id from header? like a slug in blogs? -->
<div class="slide" id="2"><div>
<section>
<header>
<h1 id="abstract-robots-and-cultures">Abstract: Robots and cultures</h1>
</header>
<!-- === begin markdown block ===

generated by markdown/1.2.0 on Ruby 2.2.2 (2015-04-13) [x86_64-darwin14]
on 2015-06-26 10:02:51 +0900 with Markdown engine kramdown (1.7.0)
using options {}
-->

<!-- _S9SLIDE_ -->

<ul>
<li>Robots, especially humanoids, are expected to perform human-like actions and adapt to our ways of communication in order to facilitate their acceptance in human society.</li>
<li>Among humans, the rules of communication change depending on background culture.</li>
<li>Greetings are a part of communication in which cultural differences are strong.</li>
</ul>

</section>
</div></div>
<div class="slide" id="3"><div>
<section>
<header>
<h1 id="abstract-summary-of-this-paper">Abstract: Summary of this paper</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>In this paper, we present the modelling of social factors that influence greeting choice,</li>
<li>and the resulting novel culture-dependent selection system for greeting gestures and words.</li>
<li>An experiment with German participants was run using the humanoid robot ARMAR-IIIb.</li>
</ul>

</section>
</div></div>
<div class="slide" id="4"><div>
<section>
<header>
<h1 id="introduction-acceptance-of-humanoid-robots">Introduction: Acceptance of humanoid robots</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Acceptance of humanoid robots in human societies is a critical issue.</li>
<li>One of the main factors is the relationship between the background culture of human partners and acceptance.
<ul>
<li>ecologies, social structures, philosophies, educational systems.</li>
</ul>
</li>
</ul>

</section>
</div></div>
<div class="slide" id="5"><div>
<section>
<header>
<h1 id="introduction-culture-adapted-greetings">Introduction: Culture-adapted greetings</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>In the work of Trovato et al., culture-dependent acceptance and discomfort relating to greeting gestures were found in a comparative study with Egyptian and Japanese participants.</li>
<li>The importance of culture-specific customization of greetings was thus confirmed.</li>
<li>Acceptance of robots can be improved if they are able to adapt to different kinds of greeting rules.</li>
</ul>

</section>
</div></div>
<div class="slide" id="6"><div>
<section>
<header>
<h1 id="introduction-greeting-interaction-with-robots">Introduction: Greeting interaction with robots</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Robots are expected to interact and communicate with humans of different cultural backgrounds in a natural way.</li>
<li>It is therefore important to study greeting interaction between robots and humans.
<ul>
<li>ARMAR-III: greeted the Chancellor of Germany with a handshake</li>
<li>ASIMO: is capable of performing a wider range of greetings (a handshake, waving both hands, and bowing)</li>
</ul>
</li>
</ul>

</section>
</div></div>
<div class="slide" id="7"><div>
<section>
<header>
<h1 id="introduction-objectives-of-this-paper">Introduction: Objectives of this paper</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The robot should be trained with sociology data related to one country, and evolve its behaviour by engaging with people of another country in a small number of interactions.</li>
<li>As the experiment is carried out in Germany, the interactions are with German participants, while preliminary training is done with Japanese data, which is culturally extremely different.</li>
</ul>

</section>
</div></div>
<div class="slide" id="8"><div>
<section>
<header>
<h1 id="introduction-armar-iiib">Introduction: ARMAR-IIIb</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/ARMAR-IIIb.png" style="width: 350px; height: 350px; margin-left: 200px;" /></p>

</section>
</div></div>
<div class="slide" id="9"><div>
<section>
<header>
<h1 id="introduction-target-scenario">Introduction: Target scenario</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The idea behind this study is a typical scenario in which a foreigner visiting a country for the first time greets local people in an inappropriate way, as he is unaware of the rules that define the greeting choice.
<ul>
<li>(e.g., a Westerner in Japan)</li>
</ul>
</li>
<li>For example, he might want to shake hands or hug, and will receive a bow instead.</li>
</ul>

</section>
</div></div>
<div class="slide" id="10"><div>
<section>
<header>
<h1 id="introduction-objectives-of-this-work">Introduction: Objectives of this work</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>This work is an application of sociology research to robotics.</li>
<li>Our contribution is to synthesize the complex and sparse data related to greeting types into a model;</li>
<li>create a selection and adaptation system;</li>
<li>and implement the greetings in a way that can potentially be applied to any robot.</li>
</ul>

</section>
</div></div>
<div class="slide" id="11"><div>
<section>
<header>
<h1 id="greeting-selection-greetings-among-humans">Greeting Selection: Greetings among humans</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Greetings are the means of initiating and closing an interaction.</li>
<li>We desire that robots be able to greet people in a similar way to humans.</li>
<li>For this reason, understanding current research on greetings in sociological studies is necessary.</li>
<li>Moreover, depending on cultural background, there can be different rules of engagement in human-human interaction.</li>
</ul>

</section>
</div></div>
<div class="slide" id="12"><div>
<section>
<header>
<h1 id="greeting-selection-solution-for-selection">Greeting Selection: Solution for selection</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>A unified model of greetings does not seem to exist in the literature, but a few studies have attempted a classification of greetings.</li>
<li>Some more specific studies have been done on handshaking.</li>
</ul>

</section>
</div></div>
<div class="slide" id="13"><div>
<section>
<header>
<h1 id="greeting-selection-classes-for-greetings">Greeting Selection: Classes for greetings</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>A classification of greetings was first attempted by Friedman, based on intimacy and commonness.</li>
<li>The following greeting types were mentioned: smile; wave; nod; kiss on mouth; kiss on cheek; hug; handshake; pat on back; rising; bow; salute; and kiss on hand.</li>
<li>Greenbaum et al. also performed a gender-related investigation, while [24] contained a comparative study between Germans and Japanese.</li>
</ul>

</section>
</div></div>
<div class="slide" id="14"><div>
<section>
<header>
<h1 id="greeting-selection-factors-on-classification">Greeting Selection: Factors on Classification</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>‘terms’: same terms with different meanings, or different terms with the same meaning.</li>
<li>‘location’: influences intimacy and greeting words (private or public).</li>
<li>‘intimacy’: is influenced by physical distance, eye contact, gender, location, and culture (social distance).</li>
<li>‘time’: the time of day is important for the choice of words.</li>
<li>‘politeness’, ‘power relationship’, ‘culture’, and more.</li>
</ul>

</section>
</div></div>
<div class="slide" id="15"><div>
<section>
<header>
<h1 id="greeting-selection-factors-on-classification-1">Greeting Selection: Factors on Classification</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The factors to be excluded are greyed out.</li>
</ul>

<p><img src="pictures/factors.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="16"><div>
<section>
<header>
<h1 id="model-of-greetings-assumptions-1---5">Model of Greetings: Assumptions (1 - 5)</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The simplification was guided by the following ten assumptions.</li>
<li>Only two individuals (a robot and a human participant): we do not take into consideration a higher number of individuals.</li>
<li>Eye contact is taken for granted.</li>
<li>Age is considered part of ‘power relationship’.</li>
<li>Regionality is not considered.</li>
<li>Setting is not considered.</li>
</ul>

</section>
</div></div>
<div class="slide" id="17"><div>
<section>
<header>
<h1 id="model-of-greetings-assumptions-6---10">Model of Greetings: Assumptions (6 - 10)</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Physical distance is close enough to allow interaction.</li>
<li>Gender is intended to be a same-sex dyad.</li>
<li>Affect is considered together with ‘social distance’.</li>
<li>Time since the last interaction is partially included in ‘social distance’.</li>
<li>Intimacy and politeness are not necessary.</li>
</ul>

</section>
</div></div>
<div class="slide" id="18"><div>
<section>
<header>
<h1 id="model-of-greetings-basis-of-classification">Model of Greetings: Basis of classification</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Input
<ul>
<li>All the other factors are then considered features of a mapping problem.</li>
<li>They are categorical data, as they can assume only two or three values.</li>
</ul>
</li>
<li>Output
<ul>
<li>The outputs can also assume only a limited set of categorical values.</li>
</ul>
</li>
</ul>

</section>
</div></div>
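The categorical input/output structure above can be sketched in a few lines. The feature names and values below are illustrative placeholders, not the paper's exact feature table.

```python
# Illustrative sketch of categorical context features and greeting classes.
# Feature names and value sets here are assumptions for illustration only.
from itertools import product

FEATURES = {
    "location":        ["workplace", "public"],
    "social_distance": ["unknown", "acquaintance", "friend"],
    "time":            ["morning", "afternoon", "evening"],
}
GESTURES = ["bow", "handshake", "wave", "nod"]  # assumed output classes

# Each context vector is one combination of categorical feature values.
contexts = list(product(*FEATURES.values()))
print(len(contexts))  # 2 * 3 * 3 = 18 combinations
```

Because each feature takes only two or three values, the full context space stays small, which is what makes the mapping problem tractable with little data.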
<div class="slide" id="19"><div>
<section>
<header>
<h1 id="model-of-greetings-features-mapping-discriminants-classes-and-possible-status">Model of Greetings: Features, mapping discriminants, classes, and possible status</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/classes.png" style="width: 60%; margin-left: 150px;" /></p>

</section>
</div></div>
<div class="slide" id="20"><div>
<section>
<header>
<h1 id="model-of-greetings-overview-of-the-greeting-model">Model of Greetings: Overview of the greeting model</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The greeting model takes context data as input and produces the appropriate robot posture and speech for that input.</li>
<li>The two outputs are evaluated by the participants of the experiment through written questionnaires.</li>
<li>The training data obtained from this experience are given as feedback to the two mappings.</li>
</ul>

</section>
</div></div>
<div class="slide" id="21"><div>
<section>
<header>
<h1 id="model-of-greetings-overview-of-the-greeting-model-1">Model of Greetings: Overview of the greeting model</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/model_overview.png" style="width: 75%; margin-left: 120px;" /></p>

</section>
</div></div>
<div class="slide" id="22"><div>
<section>
<header>
<h1 id="greeting-selection-system-training-data">Greeting selection system training data</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Mappings can be trained to an initial state with data taken from the literature of sociology studies.</li>
<li>Training data should be classified through some machine learning method or formula.</li>
<li>We decided to use conditional probabilities: in particular, the Naive Bayes formula to map data.</li>
<li>Naive Bayes only requires a small amount of training data.</li>
</ul>

</section>
</div></div>
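To make the Naive Bayes step concrete, here is a toy version over categorical features. The training examples and class names are invented for illustration; they are not the paper's data.

```python
# Toy Naive Bayes over categorical context features (invented examples).
from collections import Counter, defaultdict

train = [  # (location, social_distance) -> greeting
    (("workplace", "acquaintance"), "bow"),
    (("workplace", "friend"),       "bow"),
    (("public",    "unknown"),      "nod"),
    (("public",    "friend"),       "wave"),
]

prior = Counter(g for _, g in train)
cond = defaultdict(Counter)            # cond[(feature_index, value)][greeting]
for x, g in train:
    for i, v in enumerate(x):
        cond[(i, v)][g] += 1

def posterior(x):
    # P(class) * product of P(feature value | class), with Laplace smoothing
    scores = {}
    for g, n in prior.items():
        p = n / len(train)
        for i, v in enumerate(x):
            p *= (cond[(i, v)][g] + 1) / (n + 2)
        scores[g] = p
    return max(scores, key=scores.get)

print(posterior(("workplace", "unknown")))  # -> bow
```

Even with four examples, the classifier produces a plausible estimate for an unseen context vector, which matches the point that Naive Bayes needs little training data.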
<div class="slide" id="23"><div>
<section>
<header>
<h1 id="model-of-greetings-details-of-training-data">Model of Greetings: Details of training data</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>While training data for gestures can be obtained from the literature, data for words can also be obtained from text corpora.</li>
<li>English: English corpora, such as the British National Corpus or the Corpus of Historical American English, are used.</li>
<li>Japanese: extracted from data sets by [24, 37, 41-43]. Analyzing Japanese corpora is difficult.</li>
</ul>

</section>
</div></div>
<div class="slide" id="24"><div>
<section>
<header>
<h1 id="model-of-greetings-location-assumption">Model of Greetings: Location Assumption</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The location of the experiment was Germany.</li>
<li>For this reason, the only dataset needed was the Japanese one.</li>
<li>As stated in the motivations at the beginning of this paper, the robot should initially behave like a foreigner.</li>
<li>ARMAR-IIIb, trained with Japanese data, will have to interact with German people and adapt to their customs.</li>
</ul>

</section>
</div></div>
<div class="slide" id="25"><div>
<section>
<header>
<h1 id="model-of-greetings-mappings-and-questionnaires">Model of Greetings: Mappings and questionnaires</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The mapping is represented by a dataset, initially built from training data, as a table containing weights for each context vector corresponding to each greeting type.</li>
<li>We now need to update these weights.</li>
</ul>

</section>
</div></div>
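The weight table described above can be sketched as a nested dictionary: one row per context vector, one weight per greeting type. The contexts and numbers below are invented placeholders.

```python
# Sketch of the mapping as a weight table (invented contexts and weights):
# row key = context vector, row value = weight per greeting type.
weights = {
    ("workplace", "acquaintance"): {"bow": 0.7, "handshake": 0.2, "wave": 0.1},
    ("public", "unknown"):         {"bow": 0.3, "handshake": 0.3, "nod": 0.4},
}

def select(context):
    # Selection reads the row for this context and picks the heaviest greeting.
    row = weights[context]
    return max(row, key=row.get)

print(select(("workplace", "acquaintance")))  # -> bow
```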
<div class="slide" id="26"><div>
<section>
<header>
<h1 id="feedback-from-three-questionnaires">Feedback from three questionnaires</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Whenever a new feature vector is given as an input, it is checked to see whether it is already contained in the dataset or not.</li>
<li>In the former case, the weights are directly read from the dataset;</li>
<li>in the latter case, they get assigned the values of probabilities calculated through the Naive Bayes classifier.</li>
<li>The output is the chosen greeting, after which the interaction is evaluated through a questionnaire.</li>
</ul>

</section>
</div></div>
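The lookup-or-classify step above can be sketched as follows; `classifier_estimate` is a stand-in for the Naive Bayes computation, and the dataset contents are invented.

```python
# Sketch of the two cases: known context vectors read weights directly from
# the dataset; unseen ones fall back to a classifier estimate.
dataset = {("workplace", "friend"): {"bow": 0.8, "handshake": 0.2}}

def classifier_estimate(context):
    # Placeholder for the Naive Bayes probabilities of the real system.
    return {"bow": 0.5, "handshake": 0.5}

def get_weights(context):
    if context in dataset:              # former case: direct read
        return dataset[context]
    return classifier_estimate(context)  # latter case: classifier values

print(get_weights(("public", "unknown")))  # falls back to the classifier
```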
<div class="slide" id="27"><div>
<section>
<header>
<h1 id="model-of-greetings-three-questionnaires-for-feedback">Model of Greetings: Three questionnaires for feedback</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Questionnaire answers use a five-point semantic differential scale:
<ol>
<li>How appropriate was the greeting chosen by the robot for the current context?</li>
<li>(If the evaluation at point 1 was &lt;= 3) which greeting type would have been appropriate instead?</li>
<li>(If the evaluation at point 1 was &lt;= 3) which context would have been appropriate, if any, for the greeting type of point 1?</li>
</ol>
</li>
</ul>

</section>
</div></div>
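The branching in the form is simple: the two follow-up questions are asked only when the first rating is low. A minimal sketch:

```python
# Questions 2 and 3 are asked only when the appropriateness rating of
# question 1 is <= 3 on the five-point scale.
def questions_to_ask(rating_q1):
    qs = ["How appropriate was the greeting chosen by the robot?"]
    if rating_q1 <= 3:
        qs.append("Which greeting type would have been appropriate instead?")
        qs.append("Which context would have been appropriate for this greeting?")
    return qs

print(len(questions_to_ask(2)))  # 3
```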
<div class="slide" id="28"><div>
<section>
<header>
<h1 id="model-of-greetings-feedback-and-terminate-condition">Model of Greetings: Feedback and termination condition</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Weights of the affected features are multiplied by a positive or negative reward (inspired by reinforcement learning), which is calculated proportionally to the evaluation.</li>
<li>Mappings stop evolving when the following two stopping conditions are satisfied:
<ul>
<li>all possible values of all features have been explored;</li>
<li>and the moving average of the latest 10 state transitions has decreased below a certain threshold.</li>
</ul>
</li>
</ul>

</section>
</div></div>
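The reward update and the moving-average stopping test can be sketched as below. The exact proportionality constant and threshold are assumptions for illustration; the paper defines its own values.

```python
# Sketch of the reinforcement-style weight update and the stopping test.
weights = {"bow": 0.6, "handshake": 0.4}

def update(weights, chosen, evaluation):
    # evaluation is 1..5; map it to a multiplicative reward around 1.0
    # (assumed formula: +/- 10% per point away from the neutral rating 3).
    reward = 1.0 + 0.1 * (evaluation - 3)
    weights[chosen] *= reward
    return weights

def stopped(transitions, threshold=0.05, window=10):
    # Stop when the moving average of the latest `window` state-transition
    # magnitudes falls below the threshold.
    recent = transitions[-window:]
    return sum(recent) / len(recent) < threshold

update(weights, "bow", 5)
print(round(weights["bow"], 2))  # 0.72
```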
<div class="slide" id="29"><div>
<section>
<header>
<h1 id="model-of-greetings-summary">Model of Greetings: Summary</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Thanks to this implementation, mappings can evolve quickly, without requiring hundreds or thousands of iterations,</li>
<li>but rather a number comparable to the low number of interactions humans need to understand and adapt to social rules.</li>
</ul>

</section>
</div></div>
<div class="slide" id="30"><div>
<section>
<header>
<h1 id="implementation-on-armar-iiib">Implementation on ARMAR-IIIb</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>ARMAR-III is designed for close cooperation with humans.</li>
<li>ARMAR-III has a humanlike appearance,</li>
<li>with sensory capabilities similar to humans.</li>
<li>ARMAR-IIIb is a slightly modified version with a different shape of the head, the trunk, and the hands.</li>
</ul>

</section>
</div></div>
<div class="slide" id="31"><div>
<section>
<header>
<h1 id="implementation-of-gestures">Implementation of gestures</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware.</li>
<li>The patterns of the gestures are defined manually.</li>
<li>Gestures are defined in the Master Motor Map (MMM) format and then converted to the robot.</li>
</ul>

</section>
</div></div>
<div class="slide" id="32"><div>
<section>
<header>
<h1 id="master-motor-map">Master Motor Map</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The MMM is a reference 3D kinematic model,</li>
<li>providing a unified representation of various human motion capture systems, action recognition systems, imitation systems, and visualization modules.</li>
<li>This representation can subsequently be converted to other representations, such as action recognizers, 3D visualization, or implementations on different robots.</li>
<li>The MMM is intended to become a common standard in the robotics community.</li>
</ul>

</section>
</div></div>
<div class="slide" id="33"><div>
<section>
<header>
<h1 id="master-motor-map-1">Master Motor Map</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/MMM.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="34"><div>
<section>
<header>
<h1 id="master-motor-map-2">Master Motor Map</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The body model of the MMM can be seen in the left-hand illustration in the figure.</li>
<li>It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots.</li>
<li>A conversion module is necessary to perform a transformation between this kinematic model and the ARMAR-IIIb kinematic model.</li>
</ul>

</section>
</div></div>
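The conversion idea can be sketched as a joint-name mapping: joints that exist on the robot are remapped, and joints without a counterpart (such as the clavicula) are dropped. All joint names below are hypothetical, not the real MMM or ARMAR-IIIb joint tables.

```python
# Illustrative sketch of MMM-to-robot pose conversion (hypothetical joints).
MMM_TO_ROBOT = {
    "neck_pitch":  "head_pitch",
    "shoulder_r":  "arm_r_shoulder",
    "clavicula_r": None,   # no counterpart on the robot: dropped
}

def convert(mmm_pose):
    robot_pose = {}
    for joint, angle in mmm_pose.items():
        target = MMM_TO_ROBOT.get(joint)
        if target is not None:
            robot_pose[target] = angle
    return robot_pose

print(convert({"neck_pitch": 0.3, "clavicula_r": 0.1}))
```

A real converter also retargets angles to the robot's joint limits rather than copying them verbatim; this sketch only shows the structural remapping.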
<div class="slide" id="35"><div>
<section>
<header>
<h1 id="master-motor-map-3">Master Motor Map</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/MMMModel.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="36"><div>
<section>
<header>
<h1 id="mmm-support">MMM support</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The MMM framework supports a wide range of human-like robots.</li>
<li>Transfer rules can be defined in the MMM.</li>
<li>Using these conversion rules, motions can be converted from the MMM model to the movements of the robot.</li>
<li>Conversion from the MMM model may not be possible for a specific robot,</li>
<li>but the motion representation parts of the MMM can be used nevertheless.</li>
</ul>

</section>
</div></div>
<div class="slide" id="37"><div>
<section>
<header>
<h1 id="conversion-example-of-mmm">Conversion example of MMM</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>After the motions were programmed on the MMM model, they were processed by the converter.</li>
<li>The human model contains many joints that are not present in the robot configuration.</li>
<li>ARMAR does not bend its body when performing a bow;</li>
<li>the bow was instead expressed using a part present on the robot (e.g., the neck).</li>
</ul>

</section>
</div></div>
<div class="slide" id="38"><div>
<section>
<header>
<h1 id="gestureexample">Gesture example</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/GestureExample.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="39"><div>
<section>
<header>
<h1 id="implementgesturearmar">Implemented gestures on ARMAR-III</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/ImplementGestureARMARⅢ.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="40"><div>
<section>
<header>
<h1 id="modular-controller-architecture-a-modular-software-framework">Modular Controller Architecture, a modular software framework</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greeting model was also implemented.</li>
<li>The list of postures is on the left, together with the selection option.</li>
<li>When that option is activated, it is possible to select the context parameters through the radio buttons on the right.</li>
</ul>

</section>
</div></div>
<div class="slide" id="41"><div>
<section>
<header>
<h1 id="modular-controller-architecture-a-modular-software-framework-1">Modular Controller Architecture, a modular software framework</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/MCA.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="42"><div>
<section>
<header>
<h1 id="implementation-of-words">Implementation of words</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Greeting words are used in two languages: Japanese and German.</li>
<li>For example, in Japan it is common to use a specific greeting in the workplace, 「otsukaresama desu」,</li>
<li>where a standard greeting like 「konnichi wa」 would be inappropriate.</li>
<li>In German, such a greeting type does not exist,</li>
<li>but the meaning of “thank you for your effort” at work can be directly translated into German.</li>
<li>The robot knows dictionary terms, but does not understand the difference in usage of these words in different contexts.</li>
</ul>

</section>
</div></div>
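The language- and context-dependent word choice above amounts to a lookup table. The German entries below are illustrative placeholders, not the paper's actual word table.

```python
# Toy lookup for culture-dependent greeting words: the same context maps to
# different expressions per language (German entries are assumed examples).
GREETING_WORDS = {
    ("ja", "workplace"): "otsukaresama desu",
    ("ja", "public"):    "konnichi wa",
    ("de", "workplace"): "Hallo",
    ("de", "public"):    "Guten Tag",
}

def greeting_words(language, location):
    return GREETING_WORDS[(language, location)]

print(greeting_words("ja", "workplace"))  # otsukaresama desu
```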
<div class="slide" id="43"><div>
<section>
<header>
<h1 id="table-of-greeting-words">Table of greeting words</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/tableofgreetingwords.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>
<div class="slide" id="44"><div>
<section>
<header>
<h1 id="implementation-of-words-1">Implementation of words</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>These words were recorded through free text-to-speech software into wave files that could be played by the robot.</li>
<li>ARMAR does not have speakers embedded in its body,</li>
<li>so two small speakers were added behind the head and connected to another computer.</li>
</ul>

</section>
</div></div>
<div class="slide" id="45"><div>
<section>
<header>
<h1 id="experiment-description">Experiment description</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Experiments were conducted in a room in Germany, as shown in the figure.
<img src="pictures/room.png" style="width: 60%; margin-left: 150px; margin-top: 50px;" /></li>
</ul>

</section>
</div></div>
<div class="slide" id="46"><div>
<section>
<header>
<h1 id="experiment-description2">Experiment description 2</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Participants were 18 German people of different ages, genders, and workplaces.</li>
<li>The robot could be trained with various combinations of context.</li>
<li>It was not possible to include all combinations of feature values in the experiment;</li>
<li>for example, there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’].</li>
<li>The [‘location’: ‘private’] case was left out, because it is impossible to simulate the interaction in a private context, such as one’s home.</li>
</ul>

</section>
</div></div>
<div class="slide" id="47"><div>
<section>
<header>
<h1 id="experiment-description3">Experiment description 3</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>Some participants repeated the experiment more than once.</li>
<li>For example, the experiment was repeated at different times of the day,</li>
<li>or with the social distance changed from ‘unknown’ to ‘acquaintance’ after the first encounter.</li>
<li>In this way, we could collect more data by manipulating the value of a single feature.</li>
</ul>

</section>
</div></div>
<div class="slide" id="48"><div>
<section>
<header>
<h1 id="statistics-of-participants">Statistics of participants</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The demographics of the 18 participants were as follows:
<ol>
<li>gender: M: 10; F: 8</li>
<li>average age: 31.33</li>
<li>age standard deviation: 13.16</li>
</ol>
</li>
</ul>

</section>
</div></div>
<div class="slide" id="49"><div>
<section>
<header>
<h1 id="tatistics-of-participants">Statistics of participants</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The number of interactions was determined by the stopping condition of the algorithm.</li>
<li>The number of interactions, taking repetitions into account, was 30:
<ol>
<li>gender: M: 18; F: 12</li>
<li>average age: 29.43</li>
<li>age standard deviation: 12.46</li>
</ol>
</li>
</ul>

</section>
</div></div>
<div class="slide" id="50"><div>
<section>
<header>
<h1 id="the-experiment-protocol-is-as-follows-15">The experiment protocol is as follows: steps 1–5</h1>
</header>
<!-- _S9SLIDE_ -->

<ol>
<li>ARMAR-IIIb is trained with Japanese data.</li>
<li>The features of the encounter are given as inputs to the algorithm and the robot is prepared.</li>
<li>The participant is briefed and asked to interact with the robot in a way appropriate to the current situation.</li>
<li>The participant enters the room.</li>
<li>The robot’s greeting is triggered by an operator as the participant approaches.</li>
</ol>

</section>
</div></div>

<div class="slide" id="51"><div>
<section>
<header>
<h1 id="the-experiment-protocol-is-as-follows-610">The experiment protocol is as follows: steps 6–10</h1>
</header>
<!-- _S9SLIDE_ -->

<ol start="6">
<li>After the two parties have greeted each other, the robot is turned off.</li>
<li>The participant evaluates the robot’s behaviour through a questionnaire.</li>
<li>The mapping is updated using the participant’s feedback.</li>
<li>Steps 2–8 are repeated for each participant.</li>
<li>Training stops once the state changes have stabilised.</li>
</ol>

</section>
</div></div>

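The protocol steps can be sketched as a feedback loop. This is a minimal sketch under invented assumptions: a 1–5 questionnaire score, a naive "switch greeting on a poor score" update rule, and stability measured as a run of unchanged interactions. The slides do not show the paper's actual mapping or update algorithm, so none of these details should be read as the authors' method.

```python
def update(mapping, context, greeting, score, alternatives):
    # Naive illustrative rule: a poor rating (below 3 on a 1-5 scale)
    # switches this context to the next alternative greeting.
    if score < 3:
        mapping = dict(mapping)
        i = alternatives.index(greeting)
        mapping[context] = alternatives[(i + 1) % len(alternatives)]
    return mapping

def train(mapping, participants, alternatives, patience=3):
    """Steps 2-10 of the protocol: greet, collect questionnaire
    feedback, update the mapping, and stop once it has been stable
    for `patience` consecutive interactions (the stopping condition)."""
    stable = 0
    for participant in participants:
        context = participant["context"]            # encounter features (step 2)
        greeting = mapping[context]                 # greeting performed (step 5)
        score = participant["rating"](greeting)     # questionnaire (step 7)
        new_mapping = update(mapping, context, greeting, score, alternatives)
        stable = stable + 1 if new_mapping == mapping else 0
        mapping = new_mapping
        if stable >= patience:                      # stopping condition (step 10)
            break
    return mapping
```

With a Japanese-trained mapping (bow at the workplace) and participants who consistently rate a handshake higher, the loop converges to the handshake and stops.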
<div class="slide" id="52"><div>
<section>
<header>
<h1 id="results">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>We look at how the gestures changed over the course of the experiment.</li>
<li>Bowing was greatly reduced, while the handshake became common.</li>
<li>A hug, which does not exist in the Japanese mapping, appeared.</li>
<li>This is because participants gave feedback that a hug was appropriate.</li>
</ul>

</section>
</div></div>

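The shift described above (bowing reduced, handshakes common, a hug appearing from participant feedback) can be illustrated with a frequency-count sketch. The counts and suggestions below are invented for illustration; they are not the paper's data, and the paper's mapping need not be a simple frequency table.

```python
from collections import Counter

# Invented counts standing in for a Japanese-trained gesture mapping.
japanese = Counter({"bow": 8, "handshake": 2})

def apply_feedback(counts, suggested_gesture):
    """Counting participant suggestions lets a gesture absent from the
    trained mapping (e.g. a hug) enter the distribution."""
    counts = counts.copy()
    counts[suggested_gesture] += 1
    return counts

adapted = japanese
for suggestion in ["handshake", "hug", "handshake", "handshake", "hug"]:
    adapted = apply_feedback(adapted, suggestion)

def share(counts, gesture):
    """Fraction of all observations accounted for by one gesture."""
    return counts[gesture] / sum(counts.values())
```

After the German participants' feedback, the bow's share drops, the handshake's rises, and the hug appears with a non-zero share.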
<div class="slide" id="53"><div>
<section>
<header>
<h1 id="results-1">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/GestureTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>

<div class="slide" id="54"><div>
<section>
<header>
<h1 id="results-2">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The biggest change in the greeting-word mapping is that workplace greetings disappeared.</li>
<li>A smaller change is the increased use of informal greetings.</li>
</ul>

</section>
</div></div>

<div class="slide" id="55"><div>
<section>
<header>
<h1 id="results-3">Results</h1>
</header>
<!-- _S9SLIDE_ -->

<p><img src="pictures/GreetingWordTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p>

</section>
</div></div>

<div class="slide" id="56"><div>
<section>
<header>
<h1 id="limitations-and-improvements">Limitations and improvements</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The first obvious limitation is the manual input of context data.</li>
<li>The integrated use of cameras would make it possible to determine features such as the gender, age, and race of the human automatically.</li>
</ul>

</section>
</div></div>

<div class="slide" id="57"><div>
<section>
<header>
<h1 id="limitations-and-improvements-1">Limitations and improvements</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>A speech recognition system and cameras could also detect the human’s own greeting.</li>
<li>The robot itself could then determine whether its greeting was correct.</li>
<li>To decide whether the response to a greeting was correct and matched expectations, it could check the distance to the partner, the timing of the greeting, head orientation, or other information.</li>
</ul>

</section>
</div></div>

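The cues listed above (distance, timing, head orientation) could feed a simple correctness check. The thresholds and function name below are invented for illustration; the paper does not specify such a check.

```python
def greeting_seems_correct(distance_m, response_delay_s, faces_robot):
    """Heuristic check of whether the partner's response to the robot's
    greeting looks correct: close enough, prompt enough, and attentive.
    All thresholds are illustrative assumptions."""
    within_social_distance = distance_m < 1.5   # roughly Hall's personal/social boundary
    prompt_response = response_delay_s < 2.0    # responded without a long pause
    return within_social_distance and prompt_response and faces_robot
```

In a real system these inputs would come from the robot's cameras and microphones rather than being passed in by hand.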
<div class="slide" id="58"><div>
<section>
<header>
<h1 id="limitations-and-improvements-2">Limitations and improvements</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>The set of context features could be extended by drawing on multiple sources of data.</li>
</ul>

</section>
</div></div>

<div class="slide" id="59"><div>
<section>
<header>
<h1 id="different-kinds-of-embodiment">Different kinds of embodiment</h1>
</header>
<!-- _S9SLIDE_ -->

<ul>
<li>A humanoid robot has a body similar to a human’s.</li>
<li>Robots can vary in shape, size, and capability.</li>
<li>By extending this approach, a robot could discover, depending on its own physical characteristics, the interaction methods best suited to it when interacting with humans.</li>
</ul>

<style>
.slide.cover H2 { font-size: 60px; }
</style>

<!-- vim: set filetype=markdown.slide: -->
<!-- === end markdown block === -->

</section>
</div></div>



<script src="scripts/script.js"></script>
<!-- Copyright © 2010–2011 Vadim Makeev, http://pepelsbey.net/ -->
</body>
</html>