<!-- slide.html @ 10:3bee23948f70 (branch nozomi-finish)
     author: Nozomi Teruya <e125769@ie.u-ryukyu.ac.jp>
     date:   Fri, 03 Jun 2016 10:06:35 +0900 -->
<html>
<head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>Service robot system with an informationally structured environment</title>

<meta name="generator" content="Slide Show (S9) v2.5.0 on Ruby 2.1.0 (2013-12-25) [x86_64-darwin13.0]">
<meta name="author" content="Tatsuki IHA, Nozomi TERUYA" >

<!-- style sheet links -->
<link rel="stylesheet" href="s6/themes/projection.css" media="screen,projection">
<link rel="stylesheet" href="s6/themes/screen.css" media="screen">
</div>

<div class='slide '>
<!-- === begin markdown block ===

generated by markdown/1.2.0 on Ruby 2.1.0 (2013-12-25) [x86_64-darwin13.0]
on 2016-06-03 10:05:59 +0900 with Markdown engine kramdown (1.5.0)
using options {}
-->

<!-- _S9SLIDE_ -->
<h1 id="introduction">1. Introduction</h1>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-3">3. Overview of the ROS-TMS</h1>
<ul>
<li>The following functions are implemented in the ROS-TMS:
<ol>
<li>Communication with sensors, robots, and databases</li>
<li>Storage, revision, backup, and retrieval of real-time information in an environment</li>
<li>Maintenance and provision of information according to individual IDs assigned to each object and robot</li>
<li>Notification of the occurrence of particular predefined events, such as accidents</li>
<li>Task scheduling function for multiple robots and sensors</li>
<li>Human-system interaction for user requests</li>
<h1 id="comparison-with-the-stored-infomation-5">4.3.5. Comparison with the stored information</h1>

<div style="text-align: center;">
<img src="./images/fig12.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="robot-motion-planning">5. Robot motion planning</h1>
<ul>
<li>Robot motion planning (TMS_RP) is the component of the ROS–TMS that calculates the movement path of the robot and the trajectories of the robot arm for moving, giving goods, and avoiding obstacles, based on information acquired from TMS_SS.</li>
<li>We consider the planning necessary to implement services such as fetch-and-give tasks, because such tasks are among those most frequently required by elderly individuals in daily life.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="robot-motion-planning-1">5. Robot motion planning</h1>
<ul>
<li>Robot motion planning includes wagons for services that can carry and deliver a large number of objects, for example at tea time, or when handing out towels to residents in elderly care facilities, as shown in Fig. 14a.<br />
<img src="./images2/fig14.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="robot-motion-planning-2">5. Robot motion planning</h1>
<ul>
<li>Robot motion planning consists of the sub-planning, integration, and evaluation steps described below to implement the fetch-and-give task.<br />
<ol>
<li>Grasp planning to grip a wagon</li>
<li>Position planning for goods delivery</li>
<li>Movement path planning</li>
<li>Path planning for wagons</li>
<li>Integration of planning</li>
<li>Evaluation of efficiency and safety</li>
</ol>
</li>
<li>Each planning, integration, and evaluation process uses environment data obtained from TMS_DB and TMS_SS.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon">5.1. Grasp planning to grip a wagon</h1>
<ul>
<li>In order for a robot to push a wagon, the robot first needs to grasp the wagon.</li>
<li>A robot can push a wagon in a stable manner if it grasps the wagon by the two poles positioned on its sides.</li>
<li>Thus, the number of base position options for the robot with respect to the wagon is reduced to four (indexed by i), as shown in Fig. 14.
<img src="./images2/fig14.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon-1">5.1. Grasp planning to grip a wagon</h1>
<ul>
<li>The position and orientation of the wagon, as well as its size, are managed using the ROS–TMS database. Using this information, it is possible to determine the correct relative position.</li>
<li>Based on the wagon direction when the robot is grasping its long side, valid candidate points can be determined using Eqs. (2)–(4).</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon-2">5.1. Grasp planning to grip a wagon</h1>
<ul>
<li>Eqs. (2) through (4) below (i = 0, 1, 2, 3). Here, R represents the robot and W represents the wagon. Subscripts x, y, and θ represent the corresponding x-coordinate, y-coordinate, and posture (rotation about the z-axis).
<img src="./images2/eq234.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon-3">5.1. Grasp planning to grip a wagon</h1>
<ul>
<li>Fig. 13 shows the positional relationship between the robot and the wagon, given i = 2.
<img src="./images2/fig13.png" alt="opt" width="90%" /></li>
</ul>


</div>
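The four candidate base poses of Eqs. (2)–(4) can be sketched as follows. Since the equations themselves appear only as an image (eq234.png), the offsets used here are illustrative assumptions: one candidate per wagon side at a fixed standoff, with the robot facing the wagon.

```python
import math

def grasp_candidates(wx, wy, wtheta, half_len, half_wid, standoff):
    """Return four candidate robot base poses (Rx, Ry, Rtheta), i = 0..3,
    one per wagon side, each facing back toward the wagon.

    (wx, wy, wtheta): wagon pose; half_len/half_wid: half-dimensions of the
    wagon; standoff: assumed grasping distance from the wagon edge."""
    poses = []
    for i in range(4):
        side = wtheta + math.radians(90 * i)         # outward direction of side i
        dist = (half_len if i % 2 == 0 else half_wid) + standoff
        rx = wx + dist * math.cos(side)
        ry = wy + dist * math.sin(side)
        rtheta = side + math.pi                      # face back toward the wagon
        poses.append((rx, ry, (rtheta + math.pi) % (2 * math.pi) - math.pi))
    return poses

# Example: wagon at (3000 mm, 1000 mm, 0 deg), as in the simulation of Section 6.3
for i, pose in enumerate(grasp_candidates(3000, 1000, 0.0, 450, 300, 200)):
    print(i, [round(v, 1) for v in pose])
```

The dimensions and standoff are hypothetical; in the ROS–TMS they would come from the wagon entry in TMS_DB.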
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery">5.2. Position planning for goods delivery</h1>
<ul>
<li>In order to hand over goods to a person, it is necessary to plan both the position of the goods to be delivered and the base position of the robot according to the person’s position.</li>
<li>Using manipulability as an indicator for this planning, the system plans the position of the goods relative to the base position.</li>
<li>Manipulability represents the degree to which the hands/fingers can move when each joint angle is changed.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-1">5.2. Position planning for goods delivery</h1>
<ul>
<li>When trying to deliver goods in postures with high manipulability, it is easier to modify the motion, even when small gaps exist between the robot and the person.</li>
<li>We assume that high manipulability of the person’s arm makes it more comfortable for the person to grasp the goods. This relation is represented in Eqs. (5) and (6).</li>
<li>The velocity vector V corresponds to the position of the hands, and Q is the joint angle vector.
<img src="./images2/eq56.png" alt="opt" width="100%" /></li>
</ul>


</div>
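Eqs. (5) and (6) appear only as an image, but the quantities named (hand velocity vector V, joint angle vector Q) match the standard Jacobian relation V = J(Q)·Q̇ and Yoshikawa's manipulability measure w = √det(J Jᵀ). A minimal sketch for a planar 2-link arm, assuming that standard measure:

```python
import math

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Jacobian J of a planar 2-link arm, so that V = J(Q) * dQ/dt."""
    j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
    j12 = -l2 * math.sin(q1 + q2)
    j21 =  l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    j22 =  l2 * math.cos(q1 + q2)
    return [[j11, j12], [j21, j22]]

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J * J^T)); for square J this is |det J|."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det)

# w peaks at a 90-degree elbow and vanishes when the arm is fully stretched
print(manipulability(jacobian_2link(0.3, math.pi / 2)))  # high
print(manipulability(jacobian_2link(0.3, 0.0)))          # ~0 (singular)
```

Delivery postures with larger w leave more room to adjust the hand motion, which is exactly the property the planner exploits.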
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-2">5.2. Position planning for goods delivery</h1>
<ul>
<li>If the arm has a redundant degree of freedom, an infinite number of joint angle vectors corresponds to a single hand position.</li>
<li>To resolve this, we calculate the posture with the highest manipulability within the range of possible joint angle movements.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-3">5.2. Position planning for goods delivery</h1>
<ul>
<li>The planning procedure for the position of the goods and the position of the robot using manipulability is as follows:
<ol>
<li>The system maps the manipulability that corresponds to the robots and each person on the local coordinate system.</li>
<li>Both manipulability maps are integrated, and the position of the goods is determined.</li>
<li>Based on the position of the goods, the base position of the robot is determined.</li>
</ol>
</li>
<li>We set the robot as the origin of the robot coordinate system, taking the frontal direction as the x-axis and the lateral direction as the y-axis.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-4">5.2. Position planning for goods delivery</h1>
<ul>
<li>This mapping is superimposed along the z-axis, which is the height direction, as shown in Fig. 15b.
<img src="./images2/fig15.png" alt="opt" width="80%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-5">5.2. Position planning for goods delivery</h1>
<ul>
<li>The next step is to determine, using the manipulability map, the position of the goods that are about to be delivered.</li>
<li>As shown in Fig. 16a, we take the maximum manipulability value at each height, and retain the XY coordinates of each local coordinate system.</li>
<li>These coordinates represent the relationship between the base position and the positions of the hands.
<img src="./images2/fig16.png" alt="opt" width="80%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-6">5.2. Position planning for goods delivery</h1>
<ul>
<li>According to the calculated height on the manipulability map for a person, the system obtains the absolute coordinates of the goods to be delivered, using the previously retained relative coordinates of the hands.</li>
<li>The position of the person who will receive the delivered goods is managed through TMS_SS and TMS_DB, and this position can also be used as a reference point from which to obtain the position of the goods by applying the relative coordinates.</li>
<li>Through the aforementioned procedure, we can determine a unique position for the goods that are about to be delivered.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-7">5.2. Position planning for goods delivery</h1>
<ul>
<li>As the final step, the base position of the robot is determined so that the robot can hold out the goods to their previously calculated position.</li>
<li>From the manipulability map that corresponds to the height of a specific object, the system retrieves the relationship between the positions of the hands and the base position.</li>
<li>Using the position of the object as a reference point, the robot is able to hold the object out to any determined position if the base position satisfies this relationship.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-8">5.2. Position planning for goods delivery</h1>
<ul>
<li>Consequently, at the time of delivery, points on the circumference around the position of the object become the candidate points for the base position in the absolute coordinate system.</li>
<li>Considering every point on the circumference as a candidate would make the subsequent action planning, in which the system examines multiple candidate points, redundant.</li>
<li>A better approach is to split the circumference into n sectors, take one representative point from each sector, and thereby limit the number of candidate points.</li>
</ul>


</div>
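The sector-splitting idea above can be sketched as follows; the value of n and the choice of representative point within each sector (here, the sector centre) are not fixed by the slides and are assumptions:

```python
import math

def representative_points(cx, cy, radius, n):
    """Split the circle around the goods position (cx, cy) into n sectors and
    take one representative candidate base position per sector, at the
    sector's centre angle."""
    points = []
    for k in range(n):
        ang = 2.0 * math.pi * (k + 0.5) / n     # centre angle of sector k
        points.append((cx + radius * math.cos(ang),
                       cy + radius * math.sin(ang)))
    return points

# 8 candidate base positions on a 600 mm circle around goods at (1400, 2500)
cands = representative_points(1400, 2500, 600, 8)
print(len(cands))
```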
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-9">5.2. Position planning for goods delivery</h1>
<ul>
<li>The obtained representative points are then evaluated as in Eq. (7), with special emphasis on safety.</li>
<li>Here, View is a Boolean value that represents whether the robot enters the field of vision of the target person: if the robot is inside the field of vision, View is 1; otherwise, View is 0.</li>
<li>This term is necessary because if the robot is within the field of vision of the target person, the robot can be operated more easily and the risk of unexpected contact with the robot is also reduced.</li>
<li>Dhuman represents the distance to the target person, and Dobs represents the distance to the nearest obstacle.
<img src="./images2/eq7.png" alt="opt" width="80%" /></li>
</ul>


</div>
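Since Eq. (7) itself appears only as an image (eq7.png), the scoring below is a hedged sketch that combines the three quantities the text names (View, Dhuman, Dobs); the additive form and the weights are assumptions:

```python
def evaluate_candidate(view, d_human, d_obs, w_view=1.0, w_h=1.0, w_o=1.0):
    """Safety-oriented score of a candidate base position, in the spirit of
    Eq. (7): View is 1 if the robot is inside the person's field of vision
    (else 0); d_human and d_obs are the distances to the person and to the
    nearest obstacle.  The weighted sum is an assumed form."""
    return w_view * view + w_h * d_human + w_o * d_obs

def best_candidate(candidates):
    """candidates: list of (view, d_human, d_obs); returns the index of the best."""
    scores = [evaluate_candidate(*c) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)

# A visible candidate with good clearance beats a hidden one at equal distances
print(best_candidate([(0, 0.8, 0.5), (1, 0.8, 0.5), (1, 0.4, 0.1)]))
```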
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-10">5.2. Position planning for goods delivery</h1>
<ul>
<li>In order to reduce the risk of contact with the target person or an obstacle, positions that keep greater distances from the person and from obstacles are evaluated more highly.</li>
<li>If all the candidate points in a given circumference sector result in contact with an obstacle, the representative point of that sector is not selected.</li>
<li>Through the aforementioned process, the base position of the robot is planned based on the position of the requested goods.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-robots">5.3. Movement path planning - Path planning for robots</h1>
<ul>
<li>Path planning for robots that serve in a general living environment requires a high degree of safety, which can be achieved by lowering the probability of contact with persons.</li>
<li>However, for robots that push wagons, the parameter space that uniquely defines this state has up to six dimensions, that is, the position (x, y) and posture (θ) of the robot and of the wagon, and planning a path with the highest safety values in such a space is time consuming.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-robots-1">5.3. Movement path planning - Path planning for robots</h1>
<ul>
<li>Thus, we require a method that produces a trajectory with a high degree of safety but at the same time requires a short processing time. As such, we use a Voronoi map, as shown in Fig. 18.
<img src="./images2/fig18.png" alt="opt" width="50%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
<li>In order to plan for wagons in real time, we need to reduce the dimensions of the path search space.</li>
<li>The parameters that uniquely describe the state of a wagon-pushing robot can have up to six dimensions, but in reality the range in which the robot can operate the wagon is more limited.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-1">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
<li>We set up a control point, as shown in Fig. 19, which fixes the relative positional relationship between the robot and the control point.
<img src="./images2/fig19.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-2">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
<li>The operation of the robot is assumed to change only the relative orientation (Wθ) of the wagon with respect to the robot.</li>
<li>The range of relative positions is also limited.</li>
<li>Accordingly, a wagon-pushing robot is represented in just four dimensions, which shortens the search time for wagon path planning.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-3">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
<li>Path planning for wagon-pushing robots uses the above-mentioned basic path and is executed as follows:
<ol>
<li>The start and end points are established.</li>
<li>The path for the robot along the basic path is planned.</li>
<li>For each point on the path estimated in step 2, the position of the wagon control point is determined so that it fits the fixed relationship with the robot position.</li>
<li>If the wagon control point is not on the basic path (Fig. 20a), the posture (Rθ) of the robot is changed so that the wagon control point passes along the basic path.</li>
<li>If the head of the wagon is not on the basic path (Fig. 20b), the relative posture (Wθ) of the wagon is modified so that it passes along the basic path.</li>
<li>Steps 3 through 5 are repeated until the end point is reached.
<img src="./images2/fig20.png" alt="opt" width="50%" /></li>
</ol>
</li>
</ul>


</div>
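The loop in steps 3 through 6 can be sketched as follows. This is a simplified illustration on a polyline basic path with an assumed fixed control-point offset; the real planner searches the four-dimensional state (Rx, Ry, Rθ, Wθ) described earlier.

```python
import math

def follow_basic_path(path, ctrl_offset):
    """Walk the basic path (a polyline of (x, y) points), producing one state
    (x, y, Rtheta, Wtheta) per segment.  The wagon control point is held at a
    fixed offset ahead of the robot along its heading (step 3); the robot
    posture Rtheta is turned to keep the control point on the segment
    (step 4); Wtheta is kept at zero on straight segments (step 5)."""
    states = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        heading = math.atan2(y1 - y0, x1 - x0)
        r_theta = heading                 # step 4: control point stays on the path
        cx = x0 + ctrl_offset * math.cos(r_theta)
        cy = y0 + ctrl_offset * math.sin(r_theta)
        w_theta = 0.0                     # step 5: wagon straight on a straight segment
        states.append((x0, y0, r_theta, w_theta, (cx, cy)))
    return states

path = [(0, 0), (1000, 0), (1000, 1000)]  # hypothetical basic path in mm
for s in follow_basic_path(path, ctrl_offset=400):
    print(s[:4])
```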
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-4">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
<li>Fig. 21 shows the results of wagon path planning, using example start and end points.
<img src="./images2/fig21.png" alt="opt" width="70%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-5">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
<li>Using this procedure, we can simplify the space search without sacrificing the safety of the basic path diagram.</li>
<li>The actual time required to calculate the path of a single robot was 1.10 ms.</li>
<li>The time including the wagon path planning was 6.41 ms.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="integration-of-planning">5.4. Integration of planning</h1>
<ul>
<li>We perform operation planning for the overall item-carrying action, which integrates position, path, and arm motion planning.
<ol>
<li>Perform wagon grip position planning so that the robot can grasp a wagon loaded with goods.</li>
<li>Perform position planning for goods delivery. The results of these work position planning tasks become the candidate movement target positions for the path planning of the robot and the wagon.</li>
<li>Perform action planning that combines the above-mentioned planning tasks: from the initial position of the robot, the path the robot takes until grasping the wagon, and the path the wagon takes until the robot reaches the position at which it can deliver the goods.</li>
</ol>
</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="integration-of-planning-1">5.4. Integration of planning</h1>
<ul>
<li>For example, if there are four candidate positions for wagon gripping and four candidate positions for goods delivery around the target person, then we can plan 16 different actions, as shown in Fig. 22. The various action sequences obtained from this procedure are then evaluated to choose the optimum sequence.
<img src="./images2/fig22.png" alt="opt" width="70%" /></li>
</ul>


</div>
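The enumeration described above can be sketched as follows; the `evaluate` callback stands in for the full path/arm planning plus scoring, and the toy distance metric is purely hypothetical:

```python
from itertools import product

def plan_action_sequences(grasp_candidates, delivery_candidates, evaluate):
    """Enumerate every (wagon grasp position, goods delivery position) pair --
    4 x 4 = 16 sequences in the example above -- score each complete action
    sequence with `evaluate`, and return the best pair and its score."""
    best, best_score = None, float("-inf")
    for grasp, deliver in product(grasp_candidates, delivery_candidates):
        score = evaluate(grasp, deliver)
        if score > best_score:
            best, best_score = (grasp, deliver), score
    return best, best_score

# Toy evaluation: prefer short total travel (a hypothetical stand-in metric)
grasps = [(0, 0), (0, 2), (2, 0), (2, 2)]
delivers = [(5, 5), (5, 6), (6, 5), (6, 6)]
evaluate = lambda g, d: -(abs(g[0] - d[0]) + abs(g[1] - d[1]))
print(plan_action_sequences(grasps, delivers, evaluate))
```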
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="evaluation-of-efficiency-and-safety">5.5. Evaluation of efficiency and safety</h1>
<ul>
<li>We evaluate each candidate action sequence based on efficiency and safety, as shown in Eq. (8).</li>
<li>α, β, and γ are the weights of Length, Rotation, and ViewRatio, respectively.</li>
<li>Length and Rotation represent the total distance traveled and the total rotation angle.</li>
<li>Len_min and Rot_min represent the minimum values over all candidate actions.</li>
<li>The first and second terms of Eq. (8) are the metrics for the efficiency of an action.</li>
<li>ViewRatio is the number of motion planning points in the person’s visual field out of the total number of motion planning points.
<img src="./images2/eq8.png" alt="opt" width="100%" /></li>
</ul>


</div>
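Eq. (8) appears only as an image (eq8.png); the sketch below assumes a plausible normalised form consistent with the bullet points above (Len_min/Length and Rot_min/Rotation as efficiency terms, ViewRatio as the safety term, weighted by α, β, γ):

```python
def evaluate_action(length, rotation, view_ratio, len_min, rot_min,
                    alpha=1.0, beta=1.0, gamma=1.0):
    """Score an action sequence in the spirit of Eq. (8), under the assumed
    normalised form
        E = alpha * (Len_min / Length) + beta * (Rot_min / Rotation)
            + gamma * ViewRatio,
    so that each efficiency term equals 1 for the best candidate."""
    return (alpha * (len_min / length)
            + beta * (rot_min / rotation)
            + gamma * view_ratio)

# Two candidates: a shorter path vs. one executed fully inside the person's view
cands = [dict(length=4.0, rotation=3.14, view_ratio=0.6),
         dict(length=5.0, rotation=3.14, view_ratio=1.0)]
len_min = min(c["length"] for c in cands)
rot_min = min(c["rotation"] for c in cands)
scores = [evaluate_action(c["length"], c["rotation"], c["view_ratio"],
                          len_min, rot_min) for c in cands]
print(scores)
```

With equal weights the fully visible candidate wins here, mirroring the safety-oriented outcome reported for Plan 2–3 in Section 6.3.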
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiments">6. Experiments</h1>
<ul>
<li>We present the results of fundamental experiments described below using an actual robot and the proposed ROS–TMS.
<ol>
<li>Experiment to detect changes in the environment</li>
<li>Experiment to examine gripping and delivery of goods</li>
<li>Simulation of robot motion planning</li>
<li>Service experiments</li>
<li>Verification of modularity and scalability</li>
</ol>
</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment">6.1. Experiment to detect changes in the environment</h1>
<ul>
<li>We conducted experiments to detect changes using ODS (Section 4.3) with various pieces of furniture.</li>
<li>We consider six pieces of target furniture: two tables, two shelves, one chair, and one bed.</li>
<li>For each piece of furniture, we prepared 10 sets of previously stored data and newly acquired data of various kinds of goods, including books, snacks, cups, etc., and performed point change detection separately for each set.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment-1">6.1. Experiment to detect changes in the environment</h1>
<ul>
<li>As the evaluation method, we considered the ratio of detected changes to the number of objects that were changed (change detection ratio).</li>
<li>We also considered over-detection, which occurs when the system detects a change that has not actually occurred.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment-2">6.1. Experiment to detect changes in the environment</h1>
<ul>
<li>The change detection ratios for each furniture type are as follows: 93.3% for tables, 93.4% for shelves, 84.6% for chairs, and 91.3% for beds.
<img src="./images2/table3.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment-3">6.1. Experiment to detect changes in the environment</h1>
<ul>
<li>The sections enclosed by circles in each image represent points that actually underwent changes.
<img src="./images2/fig23.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-examine-gripping-and-delivery-of-goods">6.2. Experiment to examine gripping and delivery of goods</h1>
<ul>
<li>We performed an operation experiment in which a robot grasps an object located on a wagon and delivers the object to a person.</li>
<li>As a prerequisite for this service, the goods are assumed to have been placed on the wagon, and their positions are known in advance.</li>
<li>After performing the experiment 10 times, the robot had successfully grasped and delivered the object in all cases.
<img src="./images2/fig24.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-examine-gripping-and-delivery-of-goods-1">6.2. Experiment to examine gripping and delivery of goods</h1>
<ul>
<li>We measured the displacement of the position of the goods (Ox or Oy in Fig. 25) and the linear distance (d) between the measured value and the true value at the time of delivery, to verify the effect of rotation errors and arm posture errors.</li>
</ul>

<p><img src="./images2/fig25.png" alt="opt" width="50%" />
<img src="./images2/table4.png" alt="right" width="90%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-examine-gripping-and-delivery-of-goods-2">6.2. Experiment to examine gripping and delivery of goods</h1>
<ul>
<li>The distance error of the position of the goods at the time of delivery was 35.8 mm.</li>
<li>Thanks to the manipulability-based planning, the system can cope with these errors, because it plans a delivery posture with some extra margin in which persons and robots can move their hands.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning">6.3. Simulation of robot motion planning</h1>
<ul>
<li>We set one initial position for the robot (Rx, Ry, Rθ) = (1000 mm, 1000 mm, 0°), the wagon (Wx, Wy, Wθ) = (3000 mm, 1000 mm, 0°), and the target person (Hx, Hy, Hθ) = (1400 mm, 2500 mm, -90°), and assume the person is in a sitting state.</li>
<li>The range of vision of this person is shown by the red area in Fig. 26b.
<img src="./images2/fig26.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-1">6.3. Simulation of robot motion planning</h1>
<ul>
<li>The action planning result that passes through wagon grip candidate 1:
<img src="./images2/fig27.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-2">6.3. Simulation of robot motion planning</h1>
<ul>
<li>The action planning result that passes through wagon grip candidate 2:
<img src="./images2/fig28.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-3">6.3. Simulation of robot motion planning</h1>
<ul>
<li>Furthermore, the evaluation values obtained by changing the weight of each evaluation term for each planning result are listed in Table 5, Table 6, and Table 7.</li>
</ul>

<p><img src="./images2/table5.png" alt="right" width="50%" />
<img src="./images2/table6.png" alt="right" width="50%" />
<img src="./images2/table7.png" alt="right" width="70%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-4">6.3. Simulation of robot motion planning</h1>
<ul>
<li>The actions of Plan 2–3 were the most highly evaluated (Table 5).</li>
<li>Fig. 28a and d indicate that all of the actions occur within the field of vision of the person.</li>
<li>Since the target person can monitor the robot’s actions at all times, the risk of the robot unexpectedly touching the person is lower, and if the robot fails during an action, the situation can be dealt with immediately.</li>
<li>The action plan chosen from the above results according to the proposed evaluation values exhibits both efficiency and high safety.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="service-experiments">6.4. Service experiments</h1>
<p>We performed a service experiment for the carriage of goods, in accordance with the combined results of these planning sequences. The sequence of actions is shown in Fig. 29.
<img src="./images2/fig29.png" alt="right" width="100%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="service-experiments-1">6.4. Service experiments</h1>
<ul>
<li>This service was carried out successfully, avoiding any contact with the environment.</li>
<li>The total time for the task execution was 312 s when the maximum velocity of SmartPal-V was limited to 10 mm/s for safety.</li>
<li>The robot position was confirmed to always be within the range of vision of the subject during execution.</li>
<li>Accordingly, we can say that the planned actions had an appropriate level of safety.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="service-experiments-2">6.4. Service experiments</h1>
<ul>
<li>There was a margin for the movement of the hands, as shown in Fig. 29f, so the delivery process could appropriately cope with the movement errors of the robot.</li>
<li>In reality, the maximum error from the desired trajectory was about 0.092 m in the experiments.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="verification-of-modularity-and-scalability">6.5. Verification of modularity and scalability</h1>
<ul>
<li>We built the ROS–TMS for three types of rooms to verify its high modularity and scalability.</li>
<li>Thanks to the high flexibility and scalability of the ROS–TMS, we could set up these various environments in a comparatively short time.</li>
</ul>

<p><img src="./images2/fig30.png" alt="right" width="100%" />
<img src="./images2/fig31.png" alt="right" width="100%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="conclusions">7. Conclusions</h1>
<ul>
<li>In the present paper, we have introduced a service robot system with an informationally structured environment, named ROS–TMS, that is designed to support the daily activities of elderly individuals.</li>
<li>The room considered herein contains several sensors to monitor the environment and the person.</li>
<li>The person is assisted by a humanoid robot that uses information about the environment to support various activities.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="conclusions-1">7. Conclusions</h1>
<ul>
<li>In the present study, we concentrated on detection and fetch-and-give tasks, which we believe will be among the tasks most commonly requested by the elderly in their daily lives.</li>
<li>We have presented the various subsystems that are necessary for completing these tasks and have conducted several independent short-term experiments to demonstrate the suitability of these subsystems, such as a detection task using the sensing system and a fetch-and-give task using the robot motion planning system of the ROS–TMS.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="conclusions-2">7. Conclusions</h1>
<ul>
<li>Currently, we adopt a deterministic approach for choosing the proper data from redundant sensory information, based on manually pre-defined reliability values.</li>
<li>Our future work will include an extension to a probabilistic approach for fusing redundant sensory information.</li>
<li>We also intend to design and prepare a long-term experiment in which we can test the complete system for a longer period of time.</li>
</ul>
<!-- === end markdown block === -->
</div>


</div><!-- presentation -->