diff slide.md @ 10:3bee23948f70

nozomi-finish
author Nozomi Teruya <e125769@ie.u-ryukyu.ac.jp>
date Fri, 03 Jun 2016 10:06:35 +0900
parents 8b5af40f3a04
children 8ae14c56ea14
line diff
--- a/slide.md	Fri Jun 03 05:16:27 2016 +0900
+++ b/slide.md	Fri Jun 03 10:06:35 2016 +0900
@@ -99,7 +99,7 @@
 
 # 2. Related research
 - Ubiquitous robotics involves the design and deployment of robots in smart network environments in which everything is interconnected
-- define three types of Ubibots 
+- define three types of Ubibots
     - software robots (Sobots)
     - embedded robots (Embots)
     - mobile robots (Mobots)
@@ -110,7 +110,7 @@
 - Mobots are designed to provide services and explicitly have the ability to manipulate u-space using robotic arms
 - Sobot is a virtual robot that has the ability to move to any location through a network and to communicate with humans
 - The present authors have previously demonstrated the concept of a PIES using Ubibots in a simulated environment and u-space
- 
+
 # 2. Related research
 - RoboEarth is essentially a World Wide Web for robots, namely, a giant network and database repository in which robots can share information and learn from each other about their behavior and their environment
 - the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots
@@ -127,7 +127,7 @@
 - the problem of handing over an object between a human and a robot has been studied in HumanRobot Interaction (HRI)
 
 # 2. Related research
-- the work that is closest to ours is the one by Dehais et al 
+- the work that is closest to ours is the one by Dehais et al
 - in their study, physiological and subjective evaluation for a handing over task was presented
 - the performance of hand-over tasks were evaluated according to three criteria: legibility, safety and physical comfort
 - these criteria are represented as fields of cost functions mapped around the human to generate ergonomic hand-over motions
@@ -276,7 +276,7 @@
 - the steps of the change detection process are as follows
     1. Identification of furniture
     2. Alignment of the furniture model
-    3. Object extraction by furniture removal 
+    3. Object extraction by furniture removal
     4. Segmentation of objects
     5. Comparison with the stored information
 
@@ -363,3 +363,268 @@
 <div style="text-align: center;">
     <img src="./images/fig12.svg" alt="message" width="600">
 </div>
+
+# 5. Robot motion planning
+* Robot motion planning (TMS_RP) is the component of the ROS–TMS that calculates the movement paths of the robot and the trajectories of the robot arm for moving, handing over objects, and avoiding obstacles, based on information acquired from TMS_SS
+* We consider the planning necessary to implement services such as fetch-and-give tasks, because such tasks are among the most frequently requested by elderly individuals in daily life.
+
+# 5. Robot motion planning
+* Robot motion planning includes the use of wagons for services that require carrying and delivering a large number of objects, for example at tea time, or when handing out towels to residents in elderly care facilities, as shown in Fig. 14a  
+![opt](./images2/fig14.png){:width="100%"}
+
+# 5. Robot motion planning
+* Robot motion planning consists of the sub-planning, integration, and evaluation steps described below, which together implement the fetch-and-give task.  
+    1. Grasp planning to grip a wagon  
+    2. Position planning for goods delivery  
+    3. Movement path planning  
+    4. Path planning for wagons  
+    5. Integration of planning  
+    6. Evaluation of efficiency and safety  
+* Each planning, integration, and evaluation step uses environment data obtained from TMS_DB and TMS_SS.
+
+# 5.1. Grasp planning to grip a wagon
+* In order to push a wagon, the robot first needs to grasp it.
+* A robot can push a wagon in a stable manner if it grasps the wagon by the two poles positioned on its sides.
+* Thus, the number of candidate base positions for the robot with respect to the wagon is reduced to four (indexed by i), as shown in Fig. 14.
+![opt](./images2/fig14.png){:width="100%"}
+
+# 5.1. Grasp planning to grip a wagon
+* The position and orientation of the wagon, as well as its size, are managed in the ROS–TMS database. Using this information, it is possible to determine the correct relative position.
+* Based on the wagon direction when the robot grasps its long side, valid candidate points can be determined using Eqs. (2)–(4).
+
+# 5.1. Grasp planning to grip a wagon
+* Eqs. (2) through (4) are given below (i = 0, 1, 2, 3). Here, R represents the robot and W represents the wagon. Subscripts x, y, and θ denote the corresponding x-coordinate, y-coordinate, and posture (rotation about the z-axis).
+![opt](./images2/eq234.png){:width="100%"}
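
Since Eqs. (2)–(4) appear only as an image, the following is a hedged sketch rather than the paper's exact formulas: assuming the robot stands at a fixed offset from the midpoint of each of the four wagon sides, facing the wagon centre, the four candidate base poses (i = 0, 1, 2, 3) could be computed as:

```python
import math

def grasp_candidates(wx, wy, wtheta, half_w, half_l, offset):
    """Candidate base poses (Rx, Ry, Rtheta) around a wagon, i = 0..3.

    A plausible reconstruction of Eqs. (2)-(4): the robot stands at the
    midpoint of each side, `offset` away from the wagon edge, facing the
    wagon centre. half_w / half_l are half the wagon width / length.
    """
    candidates = []
    for i in range(4):
        # direction from the wagon centre to side i, in world coordinates
        side = wtheta + math.pi / 2.0 * i
        # distance from the centre alternates between the two side lengths
        d = (half_l if i % 2 == 0 else half_w) + offset
        rx = wx + d * math.cos(side)
        ry = wy + d * math.sin(side)
        rtheta = side + math.pi  # face back toward the wagon centre
        candidates.append((rx, ry, math.atan2(math.sin(rtheta), math.cos(rtheta))))
    return candidates
```

The offset and side-midpoint placement are illustrative assumptions; the actual equations fix the relative pose so that the gripper reaches the two poles on the chosen side.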
+
+# 5.1. Grasp planning to grip a wagon
+* Fig. 13 shows the positional relationship between the robot and the wagon, given i=2.
+![opt](./images2/fig13.png){:width="90%"}
+
+# 5.2. Position planning for goods delivery
+* In order to hand over goods to a person, it is necessary to plan both the position of the goods to be delivered and the base position of the robot according to the person’s position.
+* Using manipulability as an indicator for this planning, the system plans the position of the goods relative to the base position.
+* Manipulability represents the degree to which the hand can move when each joint angle is changed.
+
+# 5.2. Position planning for goods delivery
+* When trying to deliver goods in postures with high manipulability, it is easier to modify the motion, even when small gaps exist between the robot and the person.
+* We assume that high manipulability of the person's arm makes it more comfortable for him or her to grasp the goods. This relation is represented in Eqs. (5) and (6).
+* The velocity vector V corresponds to the hand position, and Q is the joint angle vector.
+![opt](./images2/eq56.png){:width="100%"}
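
Eqs. (5) and (6) are shown only as an image; they most likely correspond to the standard relations V = J(Q)Q̇ and Yoshikawa's manipulability measure w = √det(J Jᵀ). A minimal sketch for a planar 2-link arm (an assumed simplification, not the robot's actual kinematics):

```python
import math

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Jacobian of a planar 2-link arm: hand velocity V = J(Q) * Qdot."""
    j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
    j12 = -l2 * math.sin(q1 + q2)
    j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    j22 = l2 * math.cos(q1 + q2)
    return [[j11, j12], [j21, j22]]

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); for square J this is |det J|."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det)

# A fully stretched arm (q2 = 0) is singular (w = 0); a bent elbow gives
# the kind of high-manipulability posture the planner prefers for delivery.
```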
+
+# 5.2. Position planning for goods delivery
+* If the arm has a redundant degree of freedom, an infinite number of joint angle vectors corresponds to a single hand position.
+* To resolve this redundancy, we calculate the posture with the highest manipulability within the range of possible joint angle movements.
+
+# 5.2. Position planning for goods delivery
+* The planning procedure for the position of goods and the position of robots using manipulability is as follows:
+1. The system maps the manipulability corresponding to the robot and to each person on their local coordinate systems.
+2. Both manipulability maps are integrated, and the position of goods is determined.
+3. Based on the position of goods, the base position of the robot is determined.
+* We set the robot as the origin of the robot coordinate system, assuming the frontal direction as the x-axis and the lateral direction as the y-axis.
+
+# 5.2. Position planning for goods delivery
+* This mapping is superimposed along the z-axis, which is the height direction, as shown in Fig. 15b.
+![opt](./images2/fig15.png){:width="80%"}
+
+# 5.2. Position planning for goods delivery
+* The next step is to determine, using the manipulability map, the position of the goods that are about to be delivered.
+* As shown in Fig. 16a, we take the maximum manipulability value at each height and retain the XY coordinates in each local coordinate system.
+* These coordinates represent the relationship between the base position and the positions of the hands.
+![opt](./images2/fig16.png){:width="80%"}
+
+# 5.2. Position planning for goods delivery
+* According to the calculated height on the person's manipulability map, the system obtains the absolute coordinates of the goods to be delivered, using the previously retained relative hand coordinates.
+* The position of the person who will receive the goods is managed through TMS_SS and TMS_DB, and this position can be used as a reference point to obtain the position of the goods by applying the relative coordinates.
+* According to the aforementioned procedure, we can determine the unique position of the goods that are about to be delivered.
+
+
+# 5.2. Position planning for goods delivery
+* As the final step, the base position of the robot is determined in order to hold out the goods to their previously calculated position.
+* According to the manipulability map that corresponds to the height of a specific object, the system retrieves the relationship between the positions of hands and the base position.
+* Using the position of the object as a reference point, the robot is able to hold the object out to any determined position if the base position meets the criteria of this relationship.
+
+# 5.2. Position planning for goods delivery
+* Consequently, at the time of delivery, points on the circumference around the position of the object become the candidate points for the base position in the absolute coordinate system.
+* Considering every point on this circumference would make the subsequent action planning, which examines multiple candidate points, needlessly redundant.
+* A better approach is to split the circumference into n sectors, fetch one representative point from each sector, and thereby limit the number of candidate points.
+
+# 5.2. Position planning for goods delivery
+* After that, the obtained representative points are evaluated as in Eq. (7), while placing special emphasis on safety.
+* Here, View is a Boolean value that represents whether the robot enters the field of vision of the target person. If it is inside the field of vision, then View is 1, otherwise View is 0.
+* This calculation is necessary because if the robot can enter the field of vision of the target person, then the robot can be operated more easily and the risk of unexpected contact with the robot is also reduced.
+* Dhuman represents the distance to the target person, and Dobs represents the distance to the nearest obstacle.
+![opt](./images2/eq7.png){:width="80%"}
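
Since Eq. (7) is shown only as an image, the scoring below is a hedged reading of the slide's description (View, Dhuman, Dobs), not the paper's exact formula; the sector-splitting of the circumference from the previous slide is also sketched. All weights and the additive form are assumptions:

```python
import math

def delivery_base_candidates(gx, gy, reach, n=8):
    """Split the circle around the goods position (gx, gy) into n sectors
    and take one representative base-position candidate per sector."""
    return [(gx + reach * math.cos(2 * math.pi * k / n),
             gy + reach * math.sin(2 * math.pi * k / n)) for k in range(n)]

def score(candidate, person, obstacles, in_view):
    """A plausible reading of Eq. (7): prefer positions inside the person's
    field of view (View = 1) and far from both the person and the nearest
    obstacle. The functional form and unit weights are illustrative."""
    cx, cy = candidate
    d_human = math.hypot(cx - person[0], cy - person[1])
    d_obs = min((math.hypot(cx - ox, cy - oy) for ox, oy in obstacles),
                default=float("inf"))
    view = 1.0 if in_view(candidate) else 0.0
    return view + d_human + d_obs
```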
+
+# 5.2. Position planning for goods delivery
+* In order to reduce the risk of contact with the target person or an obstacle, positions that are too close to either are given low evaluation values.
+* If all the candidate points on a given circumference sector result in contact with an obstacle, then the representative points of that sector are not selected.
+* According to the aforementioned process, the base position of the robot is planned based on the position of the requested goods.
+
+# 5.3. Movement path planning - Path planning for robots
+* Path planning for robots that serve in a general living environment requires a high degree of safety, which can be achieved by lowering the probability of contact with persons.
+* However, for robots that push wagons, the parameter space that uniquely defines this state has a maximum of six dimensions, that is, position (x,y) and posture (θ) of a robot and a wagon, and planning a path that represents the highest safety values in such a space is time consuming.
+
+
+# 5.3. Movement path planning - Path planning for robots
+* Thus, we require a method that produces a trajectory with a high degree of safety, but at the same time requires a short processing time. As such, we use a Voronoi map, as shown in Fig. 18.
+![opt](./images2/fig18.png){:width="50%"}
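
The slide only names the Voronoi map; as a rough illustration of why it yields safe paths, the sketch below computes a clearance field on a small occupancy grid (Manhattan distance, chosen purely for simplicity). The generalized Voronoi diagram follows the ridges of this field, i.e. the cells locally farthest from any obstacle:

```python
def clearance_field(grid):
    """Brute-force Manhattan distance from each free cell (0) to the
    nearest obstacle cell (1); assumes at least one obstacle exists.
    A basic path that follows the ridges (local maxima) of this field
    maximizes clearance, which is the safety property of the Voronoi map."""
    obstacles = [(i, j) for i, row in enumerate(grid)
                 for j, v in enumerate(row) if v]
    return [[0 if v else min(abs(i - oi) + abs(j - oj) for oi, oj in obstacles)
             for j, v in enumerate(row)]
            for i, row in enumerate(grid)]
```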
+
+# 5.3. Movement path planning - Path planning for wagons
+* In order to be able to plan for wagons in real time, we need to reduce the dimensions of the path search space.
+* The parameters that uniquely describe the state of a wagon pushing robot can have a maximum of six dimensions, but in reality the range in which the robot can operate the wagon is more limited.
+
+# 5.3. Movement path planning - Path planning for wagons
+* We set up a control point on the wagon, as shown in Fig. 19, and fix the relative position between the robot and this control point.
+![opt](./images2/fig19.png){:width="90%"}
+
+# 5.3. Movement path planning - Path planning for wagons
+* The operation of the robot is assumed to change in terms of the relative orientation (Wθ) of the wagon with respect to the robot.
+* The range of relative positions is also limited.
+* Accordingly, the state of a wagon-pushing robot can be represented in just four dimensions, which shortens the search time for wagon path planning.
+
+# 5.3. Movement path planning - Path planning for wagons
+* Path planning for wagon-pushing robots uses the above-mentioned basic path and is executed as follows:
+1. The start and end points are established.
+2. The path for each robot along the basic path is planned.
+3. According to each point on the path estimated in step 2, the position of the wagon control point is determined considering the manner in which the position of the wagon control point fits the relationship with the robot position.
+
+# 5.3. Movement path planning - Path planning for wagons
+4. If the wagon control point is not on the basic path (Fig. 20a), posture (Rθ) of the robot is changed so that the wagon control point passes along the basic path.
+5. If the head of the wagon is not on the basic path (Fig. 20b), the relative posture (Wθ) of the wagon is modified so that it passes along the basic path.
+6. Steps 3 through 5 are repeated until the end point is reached.
+![opt](./images2/fig20.png){:width="50%"}
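
Steps 3 through 5 above can be sketched as follows. This is a drastically simplified, assumed geometry: the wagon control point is taken to sit a fixed distance directly ahead of the robot, and the robot posture at each path point is simply aimed at the next point so that the control point also tracks the basic path:

```python
import math

def control_point(rx, ry, rtheta, arm_len):
    """Wagon control point, fixed relative to the robot (cf. Fig. 19);
    assumed here to sit arm_len ahead of the robot along its heading."""
    return rx + arm_len * math.cos(rtheta), ry + arm_len * math.sin(rtheta)

def follow_basic_path(path, arm_len):
    """For each point on the basic path, choose the robot posture R_theta
    (step 4) so that the wagon control point also lands on the path."""
    poses = []
    for (px, py), (nx, ny) in zip(path, path[1:]):
        rtheta = math.atan2(ny - py, nx - px)  # aim at the next path point
        cx, cy = control_point(px, py, rtheta, arm_len)
        poses.append((px, py, rtheta, cx, cy))
    return poses
```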
+
+# 5.3. Movement path planning - Path planning for wagons
+* Fig. 21 shows the results of wagon path planning, using example start and end points.
+![opt](./images2/fig21.png){:width="70%"}
+
+# 5.3. Movement path planning - Path planning for wagons
+* Using this procedure, we can simplify the search space without sacrificing the safety of the basic path.
+* The actual time required to calculate the path of a single robot was 1.10 ms.
+* The time including the wagon path planning was 6.41 ms.
+
+# 5.4. Integration of planning
+* We perform operation planning for the overall item-carrying action, which integrates position, path, and arm motion planning.
+1. Perform wagon grip position planning so that the robot can grasp a wagon loaded with goods.
+2. Perform position planning for goods delivery. The results of these work-position planning tasks become the candidate movement target positions for the path planning of the robot and the wagon.
+3. Perform action planning that combines the above-mentioned planning tasks: from the initial position of the robot, the path the robot takes until it grasps the wagon, and the path the wagon takes until the robot reaches the position from which it can deliver the goods.
+
+# 5.4. Integration of planning
+* For example, if there are four candidate positions for wagon gripping and four candidate positions for goods delivery around the target person, then we can plan 16 different actions, as shown in Fig. 22.
+* The various action sequences obtained from this procedure are then evaluated to choose the optimum sequence.
+![opt](./images2/fig22.png){:width="70%"}
+
+# 5.5. Evaluation of efficiency and safety
+* We evaluate each candidate action sequence based on efficiency and safety, as shown in Eq. (8).
+* α, β, and γ are the respective weights of the Length, Rotation, and ViewRatio terms.
+* Length and Rotation represent the total distance traveled and the total rotation angle.
+* Len_min and Rot_min represent the minimum values among all candidate actions.
+* The first and second terms of Eq. (8) are metrics for the efficiency of the action.
+* ViewRatio is the number of motion planning points inside the person's visual field divided by the total number of motion planning points.
+![opt](./images2/eq8.png){:width="100%"}
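
Eq. (8) is shown only as an image; given the slide's description, a plausible reconstruction is to normalize each efficiency term by its minimum over all candidates and add the ViewRatio safety term, weighted by α, β, γ. The normalized additive form is an assumption:

```python
def evaluate_plans(plans, alpha=1.0, beta=1.0, gamma=1.0):
    """A plausible reading of Eq. (8): score each candidate action sequence
    as alpha*Len_min/Length + beta*Rot_min/Rotation + gamma*ViewRatio, so
    shorter, less-rotating plans that stay in the person's view score higher."""
    len_min = min(p["length"] for p in plans)
    rot_min = min(p["rotation"] for p in plans)
    return [alpha * len_min / p["length"]
            + beta * rot_min / p["rotation"]
            + gamma * p["view_ratio"]
            for p in plans]
```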
+
+# 6. Experiments
+* We present the results of the fundamental experiments described below, using an actual robot and the proposed ROS–TMS.
+1. Experiment to detect changes in the environment
+2. Experiment to examine gripping and delivery of goods
+3. Simulation of robot motion planning
+4. Service experiments
+5. Verification of modularity and scalability
+
+# 6.1. Experiment to detect changes in the environment
+* We conducted experiments to detect changes using ODS (Section  4.3) with various pieces of furniture.
+* We consider six pieces of target furniture, including two tables, two shelves, one chair, and one bed.
+* For each piece of furniture, we prepared 10 sets of previously stored data and newly acquired data for various kinds of goods, including books, snacks, and cups, and performed change detection separately for each set.
+
+# 6.1. Experiment to detect changes in the environment
+* As the evaluation method, we considered the ratio of change detection with respect to the number of objects that were changed (change detection ratio).
+* We also considered over-detection, which occurs when the system detects a change that has actually not occurred.
+
+# 6.1. Experiment to detect changes in the environment
+* The change detection ratios for each furniture type are as follows: 93.3% for tables, 93.4% for shelves, 84.6% for chairs, and 91.3% for beds.
+![opt](./images2/table3.png){:width="100%"}
+
+# 6.1. Experiment to detect changes in the environment
+* The sections enclosed by circles in each image represent points that actually underwent changes.
+![opt](./images2/fig23.png){:width="100%"}
+
+# 6.2. Experiment to examine gripping and delivery of goods
+* We performed an operation experiment in which a robot grasps an object located on a wagon and delivers the object to a person.
+* As a prerequisite for this service, the goods are assumed to have been placed on the wagon, and their positions are known in advance.
+* After performing the experiment 10 times, the robot successfully grabbed and delivered the object in all cases.
+![opt](./images2/fig24.png){:width="100%"}
+
+# 6.2. Experiment to examine gripping and delivery of goods
+* We measured the displacement of the position of the goods (Ox or Oy in Fig. 25) and the linear distance (d) between the measured value and the true value at the time of delivery, to verify the effect of rotation errors and arm posture errors.
+
+![opt](./images2/fig25.png){:width="50%"}
+![right](./images2/table4.png){:width="90%"}
+
+# 6.2. Experiment to examine gripping and delivery of goods
+* The distance error of the position of the goods at the time of delivery was 35.8 mm.
+* Thanks to the manipulability-based planning, the system can cope with these errors, because it plans a delivery posture with an extra margin within which both the person and the robot can move their hands.
+
+# 6.3. Simulation of robot motion planning
+* We set one initial position for the robot, (Rx, Ry, Rθ) = (1000 mm, 1000 mm, 0°); the wagon, (Wx, Wy, Wθ) = (3000 mm, 1000 mm, 0°); and the target person, (Hx, Hy, Hθ) = (1400 mm, 2500 mm, −90°), and assume the person is sitting.
+* The range of vision of this person is shown by the red area in Fig. 26b.
+![opt](./images2/fig26.png){:width="90%"}
+
+# 6.3. Simulation of robot motion planning
+* The action planning result that passes through wagon grip candidate 1:
+![opt](./images2/fig27.png){:width="90%"}
+
+# 6.3. Simulation of robot motion planning
+* The action planning result that passes through wagon grip candidate 2:
+![opt](./images2/fig28.png){:width="90%"}
+
+# 6.3. Simulation of robot motion planning
+* Furthermore, the evaluation values obtained by changing the weight of each evaluation term for each planning result are listed in Table 5, Table 6, and Table 7.
+
+![right](./images2/table5.png){:width="50%"}
+![right](./images2/table6.png){:width="50%"}
+![right](./images2/table7.png){:width="70%"}
+
+# 6.3. Simulation of robot motion planning
+* The actions of Plan 2–3 were the most highly evaluated (Table 5).
+* Fig. 28a and d indicate that all of the actions occur within the field of vision of the person.
+* Since the target person can monitor the robot's actions at all times, the risk of the robot unexpectedly touching the person is lower, and if the robot makes an error during an action, the situation can be dealt with immediately.
+* The action plan chosen from the above results according to the proposed evaluation values exhibits both efficiency and high safety.
+
+# 6.4. Service experiments
+We performed a service experiment for the carriage of goods, in accordance with the combined results of these planning sequences. The state of the sequence of actions is shown in Fig. 29.
+![right](./images2/fig29.png){:width="100%"}
+
+# 6.4. Service experiments
+* This service was carried out successfully, avoiding any contact with the environment.
+* The total task execution time was 312 s when the maximum velocity of SmartPal-V was limited to 10 mm/s for safety.
+* The robot position was confirmed to always be within the range of vision of the subject during execution.
+* Accordingly, we can say that the planned actions had an appropriate level of safety.
+
+# 6.4. Service experiments
+* There was a margin for hand movement, as shown in Fig. 29f, so the delivery process could appropriately cope with the movement errors of the robot.
+* In reality, the maximum deviation from the desired trajectory was about 0.092 m in the experiments.
+
+# 6.5. Verification of modularity and scalability
+* We built the ROS–TMS for three types of rooms to verify its high modularity and scalability.
+* Thanks to the high flexibility and scalability of the ROS–TMS, we could set up these various environments in a comparatively short time.
+
+![right](./images2/fig30.png){:width="100%"}
+![right](./images2/fig31.png){:width="100%"}
+
+# 7. Conclusions
+* In the present paper, we have introduced a service robot system with an informationally structured environment named ROS–TMS that is designed to support daily activities of elderly individuals.
+* The room considered herein contains several sensors to monitor the environment and the person.
+* The person is assisted by a humanoid robot that uses information about the environment to support various activities.
+
+# 7. Conclusions
+* In the present study, we concentrated on detection and fetch-and-give tasks, which we believe will be among the most commonly requested tasks by the elderly in their daily lives.
+* We have presented the various subsystems that are necessary for completing this task and have conducted several independent short-term experiments to demonstrate the suitability of these subsystems, such as a detection task using a sensing system and a fetch-and-give task using a robot motion planning system of the ROS–TMS.
+
+# 7. Conclusions
+* Currently, we adopt a deterministic approach for choosing the proper data from redundant sensory information, based on manually pre-defined reliability values.
+* Our future work will include an extension to a probabilistic approach for fusing redundant sensory information.
+* We also intend to design and prepare a long-term experiment in which we can test the complete system over a longer period of time.