comparison slide.md @ 2:44a72b1ed986

Fix
author Tatsuki IHA <e125716@ie.u-ryukyu.ac.jp>
date Fri, 03 Jun 2016 04:06:43 +0900
parents f8ef341d5822
children 1c44003cf7cc
- in particular, daily life assistance for elderly individuals in hospitals and care facilities is one of the most urgent and promising applications for service robots

# 1. Introduction
- for a service robot, information about its surroundings, such as the positions of objects, furniture, humans, and other robots, is indispensable for safely performing proper service tasks
- however, current sensing technology, especially for robots equipped with external sensors, is not good enough to complete these tasks satisfactorily
- for example, a vision system is susceptible to changes in lighting conditions and the appearances of objects. moreover, the field of vision is rather narrow

# 1. Introduction
- although occlusions can be partly solved by sensors on a mobile robot, background changes and unfavorable vibrations of the robot body make processing more difficult
- in addition, the payload of a robot is rather limited, as are its computer resources

# 1. Introduction
- fixed sensors in an environment are more stable and can more easily gather information about the environment
- if a sufficient number of sensors can be embedded in the environment in advance, occlusion is no longer a crucial problem
- information required to perform tasks is acquired by distributed sensors and transmitted to a robot on demand
- the concept of making the environment smarter, rather than the robot, is referred to as an informationally structured environment

# 1. Introduction
- an informationally structured environment is a feasible solution for introducing service robots into our daily lives using current technology
- several systems have been proposed that observe human behavior using distributed sensor systems and provide proper services in response to human requests or automatically triggered emergency detection
- several service robots have been developed that act as companions to elderly people or as assistants to humans who require special care

# 1. Introduction
- we have also been developing an informationally structured environment for assisting in the daily life of elderly people in our research project, i.e., the robot town project
- the goal of this project is to develop a distributed sensor network system covering a town-size environment consisting of several houses, buildings, and roads, and to manage robot services appropriately by monitoring events that occur in the environment

# 1. Introduction
- events sensed by an embedded sensor system are recorded in the town management system (TMS)
- appropriate information about the surroundings and instructions for proper services are then provided to each robot

# 1. Introduction
- we have also been developing an informationally structured platform in which distributed sensors and actuators are installed to support an indoor service robot
- objects are embedded with sensors and RFID tags, and all of the data are stored in the TMS database
- a service robot performs various service tasks according to the environmental data stored in the TMS database, in collaboration with distributed sensors and actuators, for example, an actuator installed in a refrigerator to open its door

# 1. Introduction
- we herein introduce a new town management system called the ROS-TMS
- in this system, the robot operating system (ROS) is adopted as a communication framework between various modules, including distributed sensors, actuators, robots, and databases

# 1. Introduction
- thanks to the ROS, we were able to develop a highly flexible and scalable system
- adding modules such as sensors, actuators, and robots to the system, or removing them from it, is simple and straightforward
- parallelization is also easily achievable

# 1. Introduction
- we herein report the following
- introduction of the architecture and components of the ROS-TMS
- object detection using the sensing system of the ROS-TMS
- fetch-and-give task using the motion planning system of the ROS-TMS

# 1. Introduction
- the remainder of the present paper is organized as follows
- section 2: presents related research
- section 3: introduces the architecture and components of the ROS-TMS
- section 4: describes the sensing system of the ROS-TMS for processing the data acquired from various sensors
- section 5: describes the robot motion planning system of the ROS-TMS used to design the trajectories for moving, grasping, giving, and avoiding obstacles using the information on the environment acquired by the sensing system
- section 6: presents the experimental results for service tasks performed by a humanoid robot and the ROS-TMS
- section 7: concludes the paper

# 2. Related research
- a considerable number of studies have been performed in the area of informationally structured environments/spaces to provide human-centric intelligent services
- informationally structured environments are referred to variously as home automation systems, smart homes, ubiquitous robotics, kukanchi, and intelligent spaces, depending on the field of research and the professional experience of the researcher


# 2. Related research
- Embots can evaluate the current state of the environment using sensors, and convey that information to users
- Mobots are designed to provide services and explicitly have the ability to manipulate u-space using robotic arms
- Sobot is a virtual robot that has the ability to move to any location through a network and to communicate with humans
- The present authors have previously demonstrated the concept of a PIES using Ubibots in a simulated environment and u-space [32,33]

# 2. Related research
- RoboEarth is essentially a World Wide Web for robots, namely, a giant network and database repository in which robots can share information and learn from each other about their behavior and their environment
- the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots

# 2. Related research
- the informationally structured environment/space (also referred to as Kukanchi, a Japanese word meaning interactive human-space design and intelligence) has received a great deal of attention in robotics research as an alternative approach to the realization of a system of intelligent robots operating in our daily environment
- human-centered systems require, in particular, sophisticated physical and information services, which are based on sensor networks, ubiquitous computing, and intelligent artifacts
- information resources and accessibility within an environment are essential for people and robots
- the environment surrounding people and robots should have a structured platform for gathering, storing, transforming, and providing information
- such an environment is referred to as an informationally structured space

# 2. Related research
- in section 5, we present a coordinated motion planning technique for a fetch-and-give task, including handing over an object to a person
- the problem of handing over an object between a human and a robot has been studied in Human-Robot Interaction (HRI)
- the problem of pushing carts using robots has been reported in many studies so far
- the earliest studies on pushing a cart used a single manipulator mounted on a mobile base

# 2. Related research
- the problem of towing a trailer has also been discussed as an application of a mobile manipulator and a cart
- this work is close to the approach in the present paper; however, in our technique, the pivot point of the cart is placed in front of the robot

# 2. Related research
- the work that is closest to ours is that of Scholz et al.
- they provided a solution for real-time navigation in a cluttered indoor environment using 3D sensing

# 2. Related research
- many previous works focus on the navigation and control problems for movable objects
- we, on the other hand, consider the problem of handing over an object to a human using a wagon, and propose a total motion planning technique for a fetch-and-give task with a wagon

# 3. Overview of the ROS-TMS
- in the present paper, we extend the TMS and develop a new Town Management System called the ROS-TMS
- This system has three primary components

<div style="text-align: center;">
<img src="./images/fig3.svg" alt="message" width="600">
</div>

# 3. Overview of the ROS-TMS
- events occurring in the real world, such as user behavior or user requests, and the current situation of the real world are sensed by a distributed sensing system
- the gathered information is then stored in the database

<div style="text-align: center;">
<img src="./images/fig3.svg" alt="message" width="600">
</div>
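
The sense-then-store flow above can be sketched as a toy event store. The class, field names, and schema below are illustrative assumptions, not the actual TMS interface:

```python
import time

# minimal in-memory stand-in for the TMS database (hypothetical schema)
class TmsDatabase:
    def __init__(self):
        self.events = []

    def record(self, sensor_id, event_type, data):
        # each sensed event is stored with its source sensor and a timestamp
        self.events.append({
            "sensor": sensor_id,
            "type": event_type,
            "data": data,
            "stamp": time.time(),
        })

    def query(self, event_type):
        # a robot retrieves the information it needs on demand
        return [e for e in self.events if e["type"] == event_type]

db = TmsDatabase()
db.record("rgbd_cam_1", "object_moved", {"object": "cup", "pos": (0.4, 0.2, 0.8)})
db.record("floor_sensor_2", "person_detected", {"pos": (1.0, 2.0)})
print(len(db.query("object_moved")))  # 1
```

In the real system the modules communicate over ROS rather than direct method calls, but the on-demand query pattern is the same.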
<img src="./images/fig7.svg" alt="message" width="1200">
</div>

# 4.3. Object detection system (ODS)
- this platform provides an object detection system that uses an RGB-D camera on a robot to detect objects such as those placed on a desk
- in this system, a newly appeared object or the movement of an object is detected as a change in the environment

<div style="text-align: center;">
<img src="./images/fig8.svg" alt="message" width="600">
</div>

# 4.3. Object detection system (ODS)
- the steps of the change detection process are as follows
1. Identification of furniture
2. Alignment of the furniture model
3. Object extraction by furniture removal
4. Segmentation of objects
5. Comparison with the stored information

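The five steps can be sketched as a pipeline; every function body below is an illustrative placeholder standing in for the processing described in the following subsections, not the actual ODS implementation:

```python
# hypothetical skeleton of the five-step change-detection pipeline;
# the step bodies are toy placeholders operating on lists of point labels
def identify_furniture(scan, db):
    return db.get("furniture", [])

def align_furniture_model(scan, furniture):
    return furniture  # a real system aligns the stored model here (e.g. ICP)

def remove_furniture(scan, model):
    return [p for p in scan if p not in model]

def segment_objects(points):
    return [points] if points else []

def compare_with_stored(segments, db):
    return [s for s in segments if s not in db.get("segments", [])]

def detect_changes(scan, db):
    furniture = identify_furniture(scan, db)        # 1. identification of furniture
    model = align_furniture_model(scan, furniture)  # 2. alignment of the furniture model
    objects = remove_furniture(scan, model)         # 3. object extraction by furniture removal
    segments = segment_objects(objects)             # 4. segmentation of objects
    return compare_with_stored(segments, db)        # 5. comparison with the stored information

changes = detect_changes(["table_pt", "cup_pt"],
                         {"furniture": ["table_pt"], "segments": []})
print(changes)  # [['cup_pt']]
```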
# 4.3.1. Identification of furniture
- furniture can be identified based on the positions and postures of robots and furniture stored in the database
- using this information, the robot's cameras determine the range of the surrounding environment that is actually being measured
- the system superimposes these results and the position information for the furniture to create an updated furniture location model

# 4.3.1. Identification of furniture
- the point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture's point cloud model (Fig. 9b)
- after merging the point clouds, the system deletes all points except the point cloud model for the furniture, which limits the processing range for the upcoming steps

<div style="text-align: center;">
<img src="./images/fig9.svg" alt="message" width="800">
</div>

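A minimal sketch of this cropping step, assuming an axis-aligned bounding box around the furniture model (the margin value is an arbitrary assumption):

```python
import numpy as np

# keep only the points of the merged cloud that lie inside the furniture
# model's bounding box; everything else is deleted before later steps
def crop_to_furniture(cloud, model, margin=0.05):
    lo = model.min(axis=0) - margin   # axis-aligned bounds of the model
    hi = model.max(axis=0) + margin
    mask = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[mask]

model = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.7]])  # toy furniture extent
cloud = np.array([[0.5, 0.2, 0.3],    # on the furniture -> kept
                  [3.0, 3.0, 0.0]])   # elsewhere in the room -> deleted
print(crop_to_furniture(cloud, model))
```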
# 4.3.2. Alignment of the furniture model
- we scan twice to gather point cloud datasets of the previous and current scenes
- in order to detect changes between the newly acquired information and the stored information, it is necessary to align the two point cloud datasets obtained at different times, because these data are measured from different camera viewpoints

# 4.3.2. Alignment of the furniture model
- in this method, we do not try to align the point cloud data directly, but rather align the data using the point cloud model for the furniture
- the reason is that we could not obtain a sufficient number of common areas simply by combining the camera viewpoints of the two point cloud datasets; aligning via the furniture model also reduces the amount of information that must be stored in memory
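
Aligning the two datasets through the furniture model amounts to estimating a rigid transform between two point sets. A minimal sketch using the standard Kabsch (SVD) solution, assuming known correspondences; a real system would refine this iteratively, e.g. with ICP:

```python
import numpy as np

# rigid alignment of two corresponding point sets (Kabsch / SVD):
# find R, t minimizing || (R @ src_i + t) - dst_i ||
def rigid_align(src, dst):
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# toy model points, and the same points rotated 90 deg about z and shifted
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([0.5, 0.2, 0.0])
R, t = rigid_align(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```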
- after alignment, all points corresponding to furniture are removed in order to extract the objects
- the system removes furniture according to segmentation using color information and three-dimensional positions
- more precisely, the point cloud is converted to an RGB color space and then segmented using a region-growing method

# 4.3.3. Object extraction by furniture removal
- each of the resulting segments is further segmented based on the XYZ space
- the system then selects only those segments that overlap with the model and removes these segments

<div style="text-align: center;">
<img src="./images/fig10.svg" alt="message" width="800">
</div>
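
A sketch of the overlap test, assuming a segment counts as furniture when most of its points lie close to the model; the tolerance and fraction thresholds are arbitrary assumptions:

```python
import numpy as np

# drop the segments that overlap the furniture model; a segment survives
# only if most of its points are farther than `tol` from every model point
def remove_overlapping(segments, model, tol=0.05, min_frac=0.5):
    kept = []
    for seg in segments:
        # distance from every segment point to its nearest model point
        d = np.linalg.norm(seg[:, None, :] - model[None, :, :], axis=2).min(axis=1)
        if (d < tol).mean() < min_frac:   # mostly off the model -> a real object
            kept.append(seg)
    return kept

model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])        # furniture surface points
seg_table = np.array([[0.0, 0.01, 0.0], [0.1, 0.01, 0.0]])  # lies on the model -> removed
seg_cup = np.array([[0.5, 0.5, 0.1], [0.5, 0.5, 0.2]])      # separate object -> kept
print(len(remove_overlapping([seg_table, seg_cup], model)))  # 1
```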
# 4.3.5. Comparison with the stored information
- finally, the system associates each segment from the previously stored information with the newly acquired information
- the system finds the unmatched segments and captures the movement of objects that has occurred since the latest data acquisition
- segments that do not match between the previous dataset and the newly acquired dataset reflect objects that were moved, assuming that the objects were included in the previously stored dataset
- segments that appear in the most recent dataset, but not in the previously stored dataset, reflect objects that were recently placed on the furniture

# 4.3.5. Comparison with the stored information
- the set of segments included in the association process is determined according to the center positions of the segments
- for the segment sets from the previous dataset and the newly acquired dataset, the association is performed based on a threshold distance between their center positions, considering the shape and color of the segments as the arguments for the association

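A minimal sketch of the center-distance association, with an assumed threshold; the shape and color checks described in the text are omitted here:

```python
import numpy as np

# associate previous and current segments by the distance between their
# center positions; centers farther apart than `max_dist` stay unmatched
def associate(prev_centers, curr_centers, max_dist=0.1):
    pairs, unmatched = [], set(range(len(curr_centers)))
    for i, p in enumerate(prev_centers):
        d = np.linalg.norm(curr_centers - p, axis=1)
        j = int(d.argmin())
        if d[j] < max_dist and j in unmatched:
            pairs.append((i, j))        # same object, possibly slightly moved
            unmatched.discard(j)
    return pairs, sorted(unmatched)     # unmatched = newly placed objects

prev = np.array([[0.2, 0.3, 0.8]])
curr = np.array([[0.21, 0.3, 0.8],     # matches the stored segment
                 [0.6, 0.1, 0.8]])     # new object on the furniture
print(associate(prev, curr))  # ([(0, 0)], [1])
```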
# 4.3.5. Comparison with the stored information
- we use an elevation map, which describes heights above the reference surface level of the furniture, to represent the shape of an object
- the reference surface level of furniture is, more concretely, the top surface of a table or shelf, or the seat of a chair
- the elevation map is a grid version of the reference surface level and represents, on each grid cell, the vertical height of each point with respect to the reference surface level

<div style="text-align: center;">
<img src="./images/fig11.svg" alt="message" width="800">
</div>

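A toy construction of such an elevation map, assuming a flat reference surface at a known height and an arbitrary grid resolution:

```python
import numpy as np

# grid the x-y plane over the reference surface (e.g. a table top) and keep
# the maximum height above the surface per cell; cell size and grid extent
# are arbitrary assumptions
def elevation_map(points, surface_z=0.0, cell=0.05, nx=4, ny=4):
    emap = np.zeros((nx, ny))
    for x, y, z in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < nx and 0 <= j < ny:
            emap[i, j] = max(emap[i, j], z - surface_z)  # height above surface
    return emap

# toy object: a small block 0.12 m tall near the table corner
pts = [(0.02, 0.02, 0.12), (0.03, 0.02, 0.10), (0.12, 0.12, 0.05)]
m = elevation_map(pts)
print(m[0, 0], m[2, 2])  # 0.12 0.05
```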
# 4.3.5. Comparison with the stored information
- the comparison is performed on the elevation map for each segment, taking into consideration the variations in size, the different values obtained from each grid cell, and the average value for the entire map
- the color information used to analyze the correlation between segments is the hue (H) and saturation (S)
- using these H-S histograms, the previous data and the newly acquired data are compared, allowing the system to determine whether it is dealing with the same objects

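A sketch of the H-S histogram comparison using a normalized histogram correlation; the bin counts, value ranges, and the correlation measure are assumptions:

```python
import numpy as np

# build a normalized 2D hue-saturation histogram for a segment's pixels
def hs_histogram(h, s, bins=8):
    hist, _, _ = np.histogram2d(h, s, bins=bins, range=[[0, 180], [0, 256]])
    return hist / hist.sum()

# Pearson-style correlation between two histograms: 1.0 = identical shape
def hist_correlation(h1, h2):
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
# two scans of the same reddish object vs. a bluish object
red_1 = hs_histogram(rng.uniform(0, 20, 200), rng.uniform(150, 250, 200))
red_2 = hs_histogram(rng.uniform(0, 20, 200), rng.uniform(150, 250, 200))
blue = hs_histogram(rng.uniform(100, 130, 200), rng.uniform(150, 250, 200))
print(hist_correlation(red_1, red_2) > hist_correlation(red_1, blue))  # True
```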
<div style="text-align: center;">
<img src="./images/fig11.svg" alt="message" width="800">