comparison slide.md @ 10:62f384a20c2c
commit: fix
author: tatsuki
date: Fri, 26 Jun 2015 09:09:58 +0900

# TODO: Add slides covering chapter 3 (implementation on ARMAR-IIIb)

# Implementation on ARMAR-IIIb
* ARMAR-III is designed for close cooperation with humans
* ARMAR-III has a humanlike appearance
* ARMAR-III has sensory capabilities similar to those of humans
* ARMAR-IIIb is a slightly modified version with a differently shaped head, trunk, and hands

# Implementation of gestures
* The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware
* The patterns of the gestures are defined manually
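The manually defined gesture patterns mentioned above could, for instance, be stored as joint-angle keyframes. This is only a sketch: the joint names, angles, and timings below are invented for illustration and do not come from the paper.

```python
# A hand-defined gesture as timed joint-angle keyframes.
# Joint names, angles (radians), and times (seconds) are illustrative.
bow = {
    "name": "bow",
    "keyframes": [
        (0.0, {"neck_pitch": 0.0, "torso_pitch": 0.0}),
        (1.0, {"neck_pitch": 0.6, "torso_pitch": 0.3}),
        (2.0, {"neck_pitch": 0.0, "torso_pitch": 0.0}),
    ],
}

def angles_at(gesture, t):
    """Linearly interpolate the joint angles of a gesture at time t."""
    frames = gesture["keyframes"]
    for (t0, a0), (t1, a1) in zip(frames, frames[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return {j: a0[j] + w * (a1[j] - a0[j]) for j in a0}
    return dict(frames[-1][1])  # hold the last keyframe
```

A controller would sample `angles_at` at its control rate and send the result to the joint controllers.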

# Master Motor Map
* The MMM is a reference 3D kinematic model
* It provides a unified representation for various human motion capture systems, action recognition systems, imitation systems, and visualization modules
* This representation can subsequently be converted to other representations, such as action recognizers, 3D visualization, or implementations on different robots
* The MMM is intended to become a common standard in the robotics community
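As a rough illustration of such a unified representation, a single frame of motion could be a plain record of a root pose plus named joint angles; the field and joint names here are assumptions, not the actual MMM file format.

```python
from dataclasses import dataclass, field

@dataclass
class MMMFrame:
    """One time step of motion in a unified, robot-independent form."""
    timestep: float                 # seconds from the start of the motion
    root_position: tuple            # (x, y, z) of the model root
    joint_angles: dict = field(default_factory=dict)  # joint name -> radians

# Any consumer (recognizer, visualizer, robot converter) reads the same frames.
frame = MMMFrame(0.0, (0.0, 0.0, 0.9), {"clavicula_r": 0.1, "elbow_r": 0.7})
```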

# Master Motor Map
<img src="pictures/MMM.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>

# Master Motor Map
* The body model of the MMM can be seen in the left-hand illustration in the figure
* It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots
* A conversion module is necessary to transform between this kinematic model and the ARMAR-IIIb kinematic model

# Master Motor Map
<img src="pictures/MMMModel.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>

# Converter
* A direct conversion of given joint angles would consist of a one-to-one mapping between the observed human subject and the robot
* Because of the differences between the kinematic structures of a human and the robot, a one-to-one mapping can hardly give acceptable results in terms of a human-like appearance of the reproduced movement
* This problem is addressed by applying a post-processing procedure in joint angle space
* The joint angles, given in the MMM format, are optimized with respect to the tool centre point position
* A solution is estimated by using the joint configuration of the MMM model on the robot

# MMM support
* The MMM framework supports a wide range of human-like robots
* A converter from the MMM model may not be available for a specific robot
* The motion representation parts of the MMM can be used nevertheless

# Conversion example of MMM
* After programming the postures directly on the MMM model, they were processed by the converter
* Conversion is not easy: the human model contains many joints which are not present in the robot configuration
* ARMAR cannot bend its body when performing a bow
* The bow was therefore expressed using a body part present on the robot (e.g., the neck)

# Modular Controller Architecture (MCA)
* The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greeting model was also implemented
* The list of postures is on the left, together with an option
* When that option is activated, it is possible to select the context parameters through the radio buttons on the right

# Modular Controller Architecture (MCA)
<img src="pictures/MCA.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>

# Implementation of words
* The greeting words were used in two languages, Japanese and German
* For example, in Japan it is common to use a specific greeting in the workplace: 「otsukaresama desu」
* These words were recorded with free text-to-speech software into wave files that could be played by the robot
* ARMAR does not have speakers embedded in its body
* Two small speakers were added behind the head and connected to another computer
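A minimal sketch of how the pre-recorded greeting words might be organized and played back; the file names, context labels, and the use of ALSA's `aplay` on the computer driving the speakers are assumptions.

```python
import subprocess

# Hypothetical mapping from (language, context) to the wave files
# produced once by the text-to-speech software.
GREETINGS = {
    ("ja", "workplace"): "sounds/otsukaresama_desu.wav",
    ("de", "informal"): "sounds/hallo.wav",
}

def greeting_file(language, context):
    """Look up the pre-recorded wave file for a greeting."""
    return GREETINGS[(language, context)]

def play_greeting(language, context):
    # Plays the file through the speakers added behind the head;
    # any command-line wav player could replace aplay here.
    subprocess.run(["aplay", greeting_file(language, context)], check=True)
```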

# Experiment description
* Experiments were conducted in a room in Germany, as shown in the figure
<img src="pictures/room.png" style='width: 60%; margin-left: 150px; margin-top: 50px;'>

# Experiment description
* Participants were 18 German people of different ages, genders, and workplaces
* The robot could be trained with various combinations of context parameters
* The number of interactions, taking repetitions into account, was 30
1. gender: M: 18; F: 12
2. average age: 29.43
3. age standard deviation: 12.46

# The purpose of the experiment
* The objective of the experiment was to adapt the ARMAR-IIIb greeting behaviour from Japanese to German culture
* The algorithm running on ARMAR was trained with only Japanese sociology data, and two mappings M0J were built, for gestures and for words
* After interacting with German people, the resulting mappings M1 were expected to synthesize the rules of greeting interaction in Germany

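How feedback could move the initial mappings towards the adapted ones can be sketched with a simple approval-counting scheme; this is an illustrative stand-in, not the paper's actual learning algorithm, and the context and gesture labels are invented.

```python
from collections import defaultdict

# counts[context][gesture]: how often participants approved a gesture
# in a given context (a context could be e.g. a social-distance class).
counts = defaultdict(lambda: defaultdict(int))

def observe(context, gesture, approved):
    """Record one greeting interaction and the participant's feedback."""
    if approved:
        counts[context][gesture] += 1

def best_gesture(context):
    """The currently most-approved gesture: the learned mapping."""
    g = counts[context]
    return max(g, key=g.get) if g else None

# Seed with an initial (Japanese) association, then adapt from feedback.
observe("close", "bow", approved=True)
```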
# The experiment protocol is as follows (steps 1–5)
1. ARMAR-IIIb is trained with the Japanese data
2. The context parameters of the encounter are given as inputs to the algorithm, and the robot is prepared
3. The participant is prompted to interact with the robot, taking the current situation into account
4. The participant enters the room

# Results
* How the gestures changed during the experiment is summarized in the table
* Bowing was greatly reduced, and the handshake became common
* The hug, which does not exist in the Japanese mapping, appeared
* This is because participants gave feedback that the hug is appropriate

# Results
<img src="pictures/GestureTable.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>

# Results
* The biggest change in the word mapping is that the workplace greeting disappeared
* A smaller change is the use of informal greetings
* Some other patterns can be found in the gesture mappings, judging from the columns in Table 3 for T = 0
* In Japan there is a pattern in the gestures according to social distance, but in Germany there is no such pattern; this is characteristic of Japanese society
* The two mappings reflect the Japanese sociology literature and the feedback of the German participants

# Results
<img src="pictures/GreetingWordTable.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'>

# Limitations and improvements
* In the current implementation there are a few limitations
* The first obvious limitation is related to the manual input of context data
* The integrated use of cameras would make it possible to determine features such as gender, age, and race of the human

# Limitations and improvements
* The robot itself could determine whether its greeting was correct
* A speech recognition system and cameras could also detect the human's own greeting
* Whether the response to a greeting is correct and expected could be decided by checking the distance to the partner, the timing of the greeting, head orientation, or other information
* This information would be more accurate than information collected with a questionnaire

# Limitations and improvements
* The set of contexts could be extended by using several sociology documents
* This would remove the need for the simplification made in the greeting model

# Different kinds of embodiment
* A humanoid robot has a body similar to a human's
* Robots can differ in shape, size, and capabilities
* The type of greeting should be selected to have the appropriate effect for each robot