comparison slide.md @ 10:3bee23948f70
author Nozomi Teruya <e125769@ie.u-ryukyu.ac.jp>
date Fri, 03 Jun 2016 10:06:35 +0900
- anything that consists of software components with a physical embodiment and interacts with the environment through sensors or actuators/robots is considered to be a PEIS, and a set of interconnected physically embedded intelligent systems is defined as a PEIS ecology
- tasks can be achieved using either centralized or distributed approaches using the PEIS ecology

# 2. Related research
- Ubiquitous robotics involves the design and deployment of robots in smart network environments in which everything is interconnected
- define three types of Ubibots
    - software robots (Sobots)
    - embedded robots (Embots)
    - mobile robots (Mobots)
- can provide services using various devices through any network, at any place and at any time in a ubiquitous space (u-space)

# 2. Related research
- Embots can evaluate the current state of the environment using sensors, and convey that information to users
- Mobots are designed to provide services and explicitly have the ability to manipulate u-space using robotic arms
- Sobot is a virtual robot that has the ability to move to any location through a network and to communicate with humans
- The present authors have previously demonstrated the concept of a PEIS using Ubibots in a simulated environment and u-space

# 2. Related research
- RoboEarth is essentially a World Wide Web for robots, namely, a giant network and database repository in which robots can share information and learn from each other about their behavior and their environment
- the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots

# 2. Related research

# 2. Related research
- in section 5, we present a coordinated motion planning technique for a fetch-and-give task, including handing over an object to a person
- the problem of handing over an object between a human and a robot has been studied in Human-Robot Interaction (HRI)

# 2. Related research
- the work that is closest to ours is that of Dehais et al.
- in their study, a physiological and subjective evaluation of a handing-over task was presented
- the performance of hand-over tasks was evaluated according to three criteria: legibility, safety and physical comfort
- these criteria are represented as fields of cost functions mapped around the human to generate ergonomic hand-over motions
- although their approach is similar to ours, we consider an additional criterion, namely the manipulability of both the robot and the human, for a comfortable and safe fetch-and-give task


# 4.3. Object detection system (ODS)
- the steps of the change detection process are as follows
1. Identification of furniture
2. Alignment of the furniture model
3. Object extraction by furniture removal
4. Segmentation of objects
5. Comparison with the stored information

# 4.3.1. Identification of furniture
- possible to identify furniture based on the position and posture of robots and furniture in the database

# 4.3.5. Comparison with the stored information

<div style="text-align: center;">
<img src="./images/fig12.svg" alt="message" width="600">
</div>

# 5. Robot motion planning
* Robot motion planning (TMS_RP) is the component of the ROS–TMS that calculates the movement path of the robot and the trajectories of the robot arm for moving, giving, and avoiding obstacles, based on information acquired from TMS_SS
* We consider the planning necessary to implement services such as fetch-and-give tasks, because such tasks are among the most frequently requested by elderly individuals in daily life.

# 5. Robot motion planning
* Robot motion planning includes the use of wagons, which can carry and deliver a large number of objects at once, for example at tea time or when handing out towels to residents in elderly care facilities, as shown in Fig. 14a
![opt](./images2/fig14.png){:width="100%"}

# 5. Robot motion planning
* Robot motion planning consists of the sub-planning, integration, and evaluation steps described below to implement the fetch-and-give task.
1. Grasp planning to grip a wagon
2. Position planning for goods delivery
3. Movement path planning
4. Path planning for wagons
5. Integration of planning
6. Evaluation of efficiency and safety
* Each planning, integration, and evaluation process uses environment data obtained from TMS_DB and TMS_SS.

# 5.1. Grasp planning to grip a wagon
* In order for a robot to push a wagon, the robot first needs to grasp the wagon.
* A robot can push a wagon in a stable manner if it grasps the wagon by the two poles positioned on its sides.
* Thus, the number of base position options for the robot with respect to the wagon is reduced to four (indexed by i), as shown in Fig. 14.
![opt](./images2/fig14.png){:width="100%"}

# 5.1. Grasp planning to grip a wagon
* The position and orientation of the wagon, as well as its size, are managed using the ROS–TMS database. Using this information, it is possible to determine the correct relative position.
* Based on the wagon direction when the robot is grasping its long side, valid candidate points can be determined using Eqs. (2) through (4).

# 5.1. Grasp planning to grip a wagon
* In Eqs. (2) through (4) below (i = 0, 1, 2, 3), R represents the robot, and W represents the wagon. Subscripts x, y, and θ represent the corresponding x-coordinate, y-coordinate, and posture (rotation about the z-axis).
![opt](./images2/eq234.png){:width="100%"}
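
The geometry behind the four candidates can be sketched as follows. Since Eqs. (2)–(4) appear only as an image, this is a minimal sketch under the assumption that the robot stands at a fixed offset from the wagon centre on side i and faces the wagon; `half_width` and `grasp_dist` are illustrative parameters, not the paper's actual constants.

```python
import math

def grasp_candidates(Wx, Wy, Wtheta, half_width, grasp_dist):
    """Four candidate base poses (Rx, Ry, Rtheta) around a wagon.

    For each side i = 0..3 the robot stands at a fixed offset from
    the wagon centre, facing it (assumed form of Eqs. (2)-(4))."""
    candidates = []
    for i in range(4):
        phi = Wtheta + i * math.pi / 2          # direction of side i
        Rx = Wx + (half_width + grasp_dist) * math.cos(phi)
        Ry = Wy + (half_width + grasp_dist) * math.sin(phi)
        Rtheta = phi + math.pi                  # face the wagon centre
        candidates.append((Rx, Ry, Rtheta))
    return candidates
```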

# 5.1. Grasp planning to grip a wagon
* Fig. 13 shows the positional relationship between the robot and the wagon, given i = 2.
![opt](./images2/fig13.png){:width="90%"}

# 5.2. Position planning for goods delivery
* In order to hand over goods to a person, it is necessary to plan both the position of the goods to be delivered and the base position of the robot according to the person’s position.
* Using manipulability as an indicator for this planning, the system plans the position of the goods relative to the base position.
* Manipulability is the degree to which the hands/fingers can move when each joint angle is changed.

# 5.2. Position planning for goods delivery
* When delivering goods in postures with high manipulability, it is easier to modify the motion, even when small gaps exist between the robot and the person.
* We assume that high manipulability of the person’s arm makes it more comfortable for the person to grasp the goods. This relation is represented in Eqs. (5) and (6).
* The velocity vector V corresponds to the position of the hands, and Q is the joint angle vector.
![opt](./images2/eq56.png){:width="100%"}
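
Eqs. (5) and (6) relate hand velocity to joint velocity through the arm Jacobian, V = J(Q)Q̇, and manipulability is commonly computed as Yoshikawa's measure w = sqrt(det(J Jᵀ)). A small sketch for a planar 2-link arm (the link lengths are illustrative):

```python
import math

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Jacobian J of the hand position of a planar 2-link arm,
    so that the hand velocity is V = J(Q) * Qdot (Eq. (5))."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); for a square J this
    reduces to |det J|.  For this arm, det J = l1 * l2 * sin(q2),
    so the measure vanishes at the outstretched singularity q2 = 0."""
    a, b = J[0]
    c, d = J[1]
    return abs(a * d - b * c)
```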

# 5.2. Position planning for goods delivery
* If the arm has a redundant degree of freedom, an infinite number of joint angle vectors correspond to a single hand position.
* To resolve this, we calculate the posture with the highest manipulability within the range of possible joint angle movements.

# 5.2. Position planning for goods delivery
* The planning procedure for the position of goods and the position of robots using manipulability is as follows:
1. The system maps the manipulability that corresponds to the robots and each person on the local coordinate system.
2. Both manipulability maps are integrated, and the position of the goods is determined.
3. Based on the position of the goods, the base position of the robot is determined.
* We set the robot as the origin of the robot coordinate system, taking the frontal direction as the x-axis and the lateral direction as the y-axis.
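
Step 2 of the procedure above can be sketched as below. The slides do not state how the two manipulability maps are combined; taking the elementwise minimum (so that both the robot and the person must reach the point comfortably) is one plausible choice, used here purely for illustration.

```python
def integrate_maps(robot_map, person_map):
    """Combine two manipulability maps defined on the same grid and
    return the cell with the best combined value (assumed rule: the
    elementwise minimum of the two maps)."""
    best, best_cell = -1.0, None
    for i, row in enumerate(robot_map):
        for j, r in enumerate(row):
            combined = min(r, person_map[i][j])
            if combined > best:
                best, best_cell = combined, (i, j)
    return best_cell, best
```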

# 5.2. Position planning for goods delivery
* This mapping is superimposed along the z-axis, which is the height direction, as shown in Fig. 15b.
![opt](./images2/fig15.png){:width="80%"}

# 5.2. Position planning for goods delivery
* The next step is to determine, using the manipulability map, the position of the goods that are about to be delivered.
* As shown in Fig. 16a, we take the maximum manipulability value at each height, and retain the XY coordinates in each local coordinate system.
* These coordinates represent the relationship between the base position and the positions of the hands.
![opt](./images2/fig16.png){:width="80%"}

# 5.2. Position planning for goods delivery
* According to the calculated height on the manipulability map for a person, the system obtains the absolute coordinates of the goods to be delivered, using the previously retained relative coordinates of the hands.
* The position of the person who will receive the delivered goods is managed through TMS_SS and TMS_DB, and it is also possible to use this position as a reference point from which to obtain the position of the goods by fitting the relative coordinates.
* According to the aforementioned procedure, we can determine a unique position for the goods that are about to be delivered.

# 5.2. Position planning for goods delivery
* As the final step, the base position of the robot is determined in order to hold out the goods to their previously calculated position.
* From the manipulability map that corresponds to the height of a specific object, the system retrieves the relationship between the positions of the hands and the base position.
* Using the position of the object as a reference point, the robot is able to hold the object out to any determined position if the base position satisfies this relationship.

# 5.2. Position planning for goods delivery
* Consequently, at the time of delivery, points on the circumference around the position of the object become candidate points for the base position in the absolute coordinate system.
* Considering all points on this circumference as candidates would make the subsequent action planning redundant.
* Instead, we split the circumference into n sectors, select a representative point from each sector, and thereby limit the number of candidate points.
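
The sector-splitting step can be sketched as follows; taking the representative of each sector at its mid-angle is an assumption, since the slides do not say how the representative is chosen.

```python
import math

def representative_points(cx, cy, radius, n):
    """Split the circumference around the goods position (cx, cy)
    into n sectors and return one representative base-position
    candidate per sector (here: the sector's mid-angle point)."""
    points = []
    for k in range(n):
        ang = (k + 0.5) * 2 * math.pi / n
        points.append((cx + radius * math.cos(ang),
                       cy + radius * math.sin(ang)))
    return points
```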

# 5.2. Position planning for goods delivery
* The obtained representative points are then evaluated as in Eq. (7), with special emphasis on safety.
* Here, View is a Boolean value that represents whether the robot enters the field of vision of the target person: View is 1 if the robot is inside the field of vision, and 0 otherwise.
* This term is necessary because if the robot is in the field of vision of the target person, then the robot can be operated more easily and the risk of unexpected contact with the robot is also reduced.
* Dhuman represents the distance to the target person, and Dobs represents the distance to the nearest obstacle.
![opt](./images2/eq7.png){:width="80%"}
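
Eq. (7) itself appears only as an image; a plausible weighted form consistent with the description (reward being in the field of vision and larger distances to the person and to obstacles) might look like the sketch below. The linear form and the weights are assumptions, not the paper's actual equation.

```python
def evaluate_candidate(view, d_human, d_obs, w_view=1.0, w_h=1.0, w_o=1.0):
    """Score one base-position candidate.  `view` is 1 if the robot is
    inside the target person's field of vision, 0 otherwise; d_human
    and d_obs are the distances to the person and the nearest obstacle.
    Assumed form, used only to illustrate the role of each term."""
    return w_view * view + w_h * d_human + w_o * d_obs
```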

# 5.2. Position planning for goods delivery
* In order to reduce the risk of contact with the target person or an obstacle, positions at a greater distance from them are evaluated more highly.
* If all the candidate points in a given circumference sector result in contact with an obstacle, then the representative point of that sector is not selected.
* According to the aforementioned process, the base position of the robot is planned based on the position of the requested goods.

# 5.3. Movement path planning - Path planning for robots
* Path planning for robots that serve in a general living environment requires a high degree of safety, which can be achieved by lowering the probability of contact with persons.
* However, for robots that push wagons, the parameter space that uniquely defines the state has a maximum of six dimensions, that is, the position (x, y) and posture (θ) of both the robot and the wagon, and planning a path with the highest safety values in such a space is time consuming.

# 5.3. Movement path planning - Path planning for robots
* Thus, we require a method that produces a trajectory with a high degree of safety but at the same time requires a short processing time. As such, we use a Voronoi map, as shown in Fig. 18.
![opt](./images2/fig18.png){:width="50%"}

# 5.3. Movement path planning - Path planning for wagons
* In order to plan for wagons in real time, we need to reduce the dimensions of the path search space.
* The parameters that uniquely describe the state of a wagon-pushing robot can have a maximum of six dimensions, but in reality the range in which the robot can operate the wagon is more limited.

# 5.3. Movement path planning - Path planning for wagons
* We set up a control point, as shown in Fig. 19, which fixes the relative positional relationship of the robot with the control point.
![opt](./images2/fig19.png){:width="90%"}

# 5.3. Movement path planning - Path planning for wagons
* The operation of the robot is assumed to change only the relative orientation (Wθ) of the wagon with respect to the robot.
* The range of relative positions is also limited.
* Accordingly, wagon-pushing robots are represented in just four dimensions, which shortens the search time for wagon path planning.

# 5.3. Movement path planning - Path planning for wagons
* Path planning for wagon-pushing robots uses the above-mentioned basic path and is executed as follows:
1. The start and end points are established.
2. The path for each robot along the basic path is planned.
3. For each point on the path estimated in step 2, the position of the wagon control point is determined so that it fits the fixed relationship with the robot position.

# 5.3. Movement path planning - Path planning for wagons
4. If the wagon control point is not on the basic path (Fig. 20a), the posture (Rθ) of the robot is changed so that the wagon control point passes along the basic path.
5. If the head of the wagon is not on the basic path (Fig. 20b), the relative posture (Wθ) of the wagon is modified so that it passes along the basic path.
6. Steps 3 through 5 are repeated until the end point is reached.
![opt](./images2/fig20.png){:width="50%"}
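
Steps 3 and 4 of the procedure can be sketched as follows, assuming the control point sits at a fixed offset directly in front of the robot (the fixed relative relationship of Fig. 19; the offset value is illustrative):

```python
import math

def control_point(Rx, Ry, Rtheta, offset):
    """Wagon control point, held at a fixed offset in front of the robot."""
    return (Rx + offset * math.cos(Rtheta),
            Ry + offset * math.sin(Rtheta))

def align_robot_to_point(Rx, Ry, target, offset):
    """Step 4: choose the robot posture Rtheta so that the control point
    heads toward the basic-path point `target`; the control point lands
    exactly on it when `target` lies at the offset distance."""
    Rtheta = math.atan2(target[1] - Ry, target[0] - Rx)
    return Rtheta, control_point(Rx, Ry, Rtheta, offset)
```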

# 5.3. Movement path planning - Path planning for wagons
* Fig. 21 shows the results of wagon path planning, using example start and end points.
![opt](./images2/fig21.png){:width="70%"}

# 5.3. Movement path planning - Path planning for wagons
* Using this procedure, we can simplify the search space without sacrificing the safety of the basic path diagram.
* The actual time required to calculate the path of a single robot was 1.10 ms.
* The time including the wagon path planning was 6.41 ms.

# 5.4. Integration of planning
* We perform operation planning for the overall item-carrying action, which integrates position, path, and arm motion planning.
1. Perform wagon grip position planning in order for the robot to grasp a wagon loaded with goods.
2. Perform position planning for goods delivery. The results of these work position planning tasks become the candidate movement target positions for the path planning of the robot and the wagon.
3. Perform action planning that combines the above-mentioned planning tasks, from the initial position of the robot to the path the robot takes until grasping the wagon, and the path the wagon takes until the robot reaches the position at which it can deliver the goods.

# 5.4. Integration of planning
* For example, if there are four candidate positions for wagon gripping and four candidate positions for goods delivery around the target person, then we can plan 16 different actions, as shown in Fig. 22. The various action sequences obtained from this procedure are then evaluated to choose the optimum sequence.
![opt](./images2/fig22.png){:width="70%"}
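
The combination of grip and delivery candidates (4 × 4 = 16 in the example of Fig. 22) can be enumerated straightforwardly; here `evaluate` stands in for the evaluation of Section 5.5.

```python
from itertools import product

def enumerate_plans(grip_candidates, delivery_candidates, evaluate):
    """Pair every wagon-grip candidate with every delivery-position
    candidate and return the plans sorted best-first by `evaluate`."""
    plans = list(product(grip_candidates, delivery_candidates))
    return sorted(plans, key=evaluate, reverse=True)
```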

# 5.5. Evaluation of efficiency and safety
* We evaluate each candidate action sequence based on efficiency and safety, as shown in Eq. (8).
* α, β, and γ are the weights of Length, Rotation, and ViewRatio, respectively.
* Length and Rotation represent the total distance traveled and the total rotation angle.
* Len_min and Rot_min represent the minimum values over all the candidate actions.
* The first and second terms of Eq. (8) are the metrics for the efficiency of an action.
* ViewRatio is the number of motion planning points in the person’s visual field divided by the total number of motion planning points.
![opt](./images2/eq8.png){:width="100%"}
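
Eq. (8) is shown only as an image; based on the description of its terms, a plausible form is the weighted sum below, in which the first two terms equal 1 for the most efficient candidate and ViewRatio rewards safety. This is a sketch of the assumed form, not the paper's exact equation.

```python
def evaluate_sequence(length, rotation, view_ratio,
                      len_min, rot_min, alpha=1.0, beta=1.0, gamma=1.0):
    """Score one candidate action sequence (assumed form of Eq. (8)).
    length/rotation: total distance traveled and total rotation angle;
    len_min/rot_min: minima over all candidates; view_ratio: fraction
    of motion-planning points inside the person's field of vision."""
    return (alpha * len_min / length
            + beta * rot_min / rotation
            + gamma * view_ratio)
```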

# 6. Experiments
* We present the results of the fundamental experiments described below, using an actual robot and the proposed ROS–TMS.
1. Experiment to detect changes in the environment
2. Experiment to examine gripping and delivery of goods
3. Simulation of robot motion planning
4. Service experiments
5. Verification of modularity and scalability

# 6.1. Experiment to detect changes in the environment
* We conducted change detection experiments using the ODS (Section 4.3) with various pieces of furniture.
* We consider six pieces of target furniture: two tables, two shelves, one chair, and one bed.
* For each piece of furniture, we prepared 10 sets of previously stored data and newly acquired data for various kinds of goods, including books, snacks, cups, etc., and performed point change detection separately for each set.

# 6.1. Experiment to detect changes in the environment
* As the evaluation method, we considered the ratio of detected changes to the number of objects that were changed (change detection ratio).
* We also considered over-detection, which occurs when the system detects a change that has not actually occurred.

# 6.1. Experiment to detect changes in the environment
* The change detection ratios for each furniture type are as follows: 93.3% for tables, 93.4% for shelves, 84.6% for chairs, and 91.3% for beds.
![opt](./images2/table3.png){:width="100%"}

# 6.1. Experiment to detect changes in the environment
* The sections enclosed by circles in each image represent points that actually underwent changes.
![opt](./images2/fig23.png){:width="100%"}

# 6.2. Experiment to examine gripping and delivery of goods
* We performed an operation experiment in which a robot grasps an object located on a wagon and delivers the object to a person.
* As a prerequisite for this service, the goods are assumed to have been placed on the wagon, and their positions are known in advance.
* After performing the experiment 10 times, the robot had successfully grasped and delivered the object in all cases.
![opt](./images2/fig24.png){:width="100%"}

# 6.2. Experiment to examine gripping and delivery of goods
* We measured the displacement of the position of the goods (Ox or Oy in Fig. 25) and the linear distance (d) between the measured value and the true value at the time of delivery, to verify the effect of rotation errors and arm posture errors.

![opt](./images2/fig25.png){:width="50%"}
![right](./images2/table4.png){:width="90%"}

# 6.2. Experiment to examine gripping and delivery of goods
* The distance error of the position of the goods at the time of delivery was 35.8 mm.
* Thanks to the manipulability-based planning, it is possible to cope with these errors, because the system plans a delivery posture with some extra margin in which persons and robots can move their hands.

# 6.3. Simulation of robot motion planning
* We set one initial position for the robot, (Rx, Ry, Rθ) = (1000 mm, 1000 mm, 0°); the wagon, (Wx, Wy, Wθ) = (3000 mm, 1000 mm, 0°); and the target person, (Hx, Hy, Hθ) = (1400 mm, 2500 mm, -90°), and assume the person is in a sitting state.
* The range of vision of this person is shown by the red area in Fig. 26b.
![opt](./images2/fig26.png){:width="90%"}

# 6.3. Simulation of robot motion planning
* The action planning result that passes through wagon grip candidate 1
![opt](./images2/fig27.png){:width="90%"}

# 6.3. Simulation of robot motion planning
* The action planning result that passes through wagon grip candidate 2
![opt](./images2/fig28.png){:width="90%"}

# 6.3. Simulation of robot motion planning
* Furthermore, the evaluation values for each planning result under different weight settings are listed in Table 5, Table 6 and Table 7.

![right](./images2/table5.png){:width="50%"}
![right](./images2/table6.png){:width="50%"}
![right](./images2/table7.png){:width="70%"}

# 6.3. Simulation of robot motion planning
* The actions of Plan 2–3 were the most highly evaluated (Table 5).
* Fig. 28a and d indicate that all of the actions occur within the field of vision of the person.
* Since the target person can monitor the robot’s actions at all times, the risk of the robot unexpectedly touching the person is lower, and if the robot fails in an action, the situation can be dealt with immediately.
* The action plan chosen from the above results according to the proposed evaluation values exhibits both efficiency and high safety.

# 6.4. Service experiments
We performed a service experiment for the carriage of goods, in accordance with the combined results of these planning sequences. The state of the sequence of actions is shown in Fig. 29.
![right](./images2/fig29.png){:width="100%"}

# 6.4. Service experiments
* This service was carried out successfully, avoiding any contact with the environment.
* The total time for the task execution was 312 s when the maximum velocity of SmartPal-V was limited to 10 mm/s for safety.
* The robot position was confirmed to always be within the range of vision of the subject during execution.
* Accordingly, we can say that the planned actions had an appropriate level of safety.

# 6.4. Service experiments
* There was a margin for the movement of the hands, as shown in Fig. 29f, with which the delivery process could appropriately cope with the movement errors of the robot.
* In reality, the maximum error from the desired trajectory was about 0.092 m in the experiments.

# 6.5. Verification of modularity and scalability
* We built the ROS–TMS for three types of rooms to verify its high modularity and scalability.
* Thanks to the high flexibility and scalability of the ROS–TMS, we could set up these various environments in a comparatively short time.

![right](./images2/fig30.png){:width="100%"}
![right](./images2/fig31.png){:width="100%"}

# 7. Conclusions
* In the present paper, we have introduced a service robot system with an informationally structured environment, named ROS–TMS, that is designed to support the daily activities of elderly individuals.
* The room considered herein contains several sensors to monitor the environment and a person.
* The person is assisted by a humanoid robot that uses information about the environment to support various activities.

# 7. Conclusions
* In the present study, we concentrated on detection and fetch-and-give tasks, which we believe will be among the most commonly requested tasks by the elderly in their daily lives.
* We have presented the various subsystems that are necessary for completing these tasks and have conducted several independent short-term experiments to demonstrate the suitability of these subsystems, such as a detection task using the sensing system and a fetch-and-give task using the robot motion planning system of the ROS–TMS.

# 7. Conclusions
* Currently, we adopt a deterministic approach for choosing the proper data from redundant sensory information, based on manually pre-defined reliability.
* Our future work will include an extension to a probabilistic approach for fusing redundant sensory information.
* We also intend to design and prepare a long-term experiment in which we can test the complete system over a longer period of time.