<img src="./images/fig6.svg" alt="message" width="600">
</div>

# 4.2. Intelligent cabinet system (ICS)
- the cabinets installed in the room are equipped with RFID readers and load cells to detect the types and positions of the objects in the cabinet
- every object in the environment has an RFID tag containing a unique ID that identifies it
- this ID is used to retrieve the object's attributes, such as its name and location, from the database

# 4.2. Intelligent cabinet system (ICS)
- using the RFID readers, we can detect the presence of a new object inside the cabinet
- the load cell information allows us to determine its exact position inside the cabinet

<div style="text-align: center;">
<img src="./images/fig7.svg" alt="message" width="1200">
</div>
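
Taken together, the two sensors supply identity (RFID) and placement (load cells). Below is a minimal sketch of how such an event pair might be fused; every class, field, and function name here is hypothetical, not part of the platform's published API:

```python
# Hypothetical fusion of an RFID detection with a load-cell reading.
from dataclasses import dataclass

@dataclass
class LoadCellEvent:
    shelf: int          # which shelf of the cabinet reported the change
    position_m: float   # position along the shelf, estimated from the load distribution
    delta_kg: float     # weight change (positive = an object was placed)

def on_rfid_detected(tag_id: str, cell: LoadCellEvent, db: dict) -> dict:
    """Resolve the tag's unique ID against the object database and attach the position."""
    obj = db[tag_id]    # attributes (e.g. name) are keyed by the tag's unique ID
    obj["location"] = (cell.shelf, cell.position_m)
    obj["weight_kg"] = cell.delta_kg
    return obj
```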

# 4.3. Object detection system (ODS)
- the platform provides an object detection system that uses an RGB-D camera mounted on a robot to detect objects such as those placed on a desk
- in this system, a newly appearing object or the movement of an object is detected as a change in the environment

<div style="text-align: center;">
<img src="./images/fig8.svg" alt="message" width="600">
</div>

# 4.3. Object detection system (ODS)
- the steps of the change detection process are as follows (a sketch of the overall pipeline follows the list)
1. Identification of furniture
2. Alignment of the furniture model
3. Object extraction by furniture removal
4. Segmentation of objects
5. Comparison with the stored information
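
A sketch of how the five steps might be wired together as one driver; the step implementations are passed in, and all names are hypothetical rather than the paper's actual code:

```python
# Hypothetical driver fixing the order and data flow of the five steps.
def detect_changes(scan, db, steps):
    furniture = steps["identify"](scan, db)           # 1. identification of furniture
    model = steps["align"](scan, furniture)           # 2. alignment of the furniture model (ICP)
    objects = steps["remove_furniture"](scan, model)  # 3. object extraction by furniture removal
    segments = steps["segment"](objects)              # 4. segmentation of objects
    return steps["compare"](segments, db)             # 5. comparison with the stored information
```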

# 4.3.1. Identification of furniture
- furniture can be identified based on the position and posture of the robots and furniture stored in the database
- using this information, the robot's cameras determine the range of the surrounding environment that is actually being measured
- the system superimposes these results and the position information for the furniture to create an updated furniture location model

# 4.3.1. Identification of furniture
- the point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture's point cloud model (Fig. 9b)
- after merging the point clouds, the system deletes all points except the furniture's point cloud model, limiting the processing range for the upcoming steps

<div style="text-align: center;">
<img src="./images/fig9.svg" alt="message" width="800">
</div>
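
One way to realize this range limiting is to crop the robot's scan to the furniture model's bounding box. The sketch below uses Open3D, which is an assumption (the paper does not name its point-cloud library), and the file names are placeholders:

```python
import open3d as o3d

scan = o3d.io.read_point_cloud("robot_scan.pcd")        # point cloud from the robot (Fig. 9a)
furniture = o3d.io.read_point_cloud("table_model.pcd")  # stored furniture model (Fig. 9b)

# enlarge the furniture's bounding box upward so objects on the surface survive the crop
box = furniture.get_axis_aligned_bounding_box()
region = scan.crop(o3d.geometry.AxisAlignedBoundingBox(
    box.get_min_bound() - 0.05,                 # small margins (values are assumptions)
    box.get_max_bound() + [0.05, 0.05, 0.30]))  # extra headroom above the surface
```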

# 4.3.2. Alignment of the furniture model
- we scan the scene twice to gather point cloud datasets of the previous and the current scene
- to detect changes between the newly acquired information and the stored information, the two point cloud datasets obtained at different times must be aligned, because they are measured from different camera viewpoints

# 4.3.2. Alignment of the furniture model
- in this method, we do not align the point cloud data directly, but rather align the data using the furniture's point cloud model
- this is because simply combining the camera viewpoints of the two point cloud datasets does not yield a sufficient number of common areas; it also reduces the amount of information that must be stored in memory
- using the aligned point cloud model, the point cloud data for objects located on the furniture can be used without needing the stored point cloud data for the furniture itself
- alignment of the furniture model is performed using the ICP algorithm
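
A minimal sketch of this alignment step with Open3D's ICP registration (the library choice is an assumption; the paper only states that the ICP algorithm is used):

```python
import numpy as np
import open3d as o3d

def align_model(model, scan, max_dist=0.02, init=np.eye(4)):
    """Align the stored furniture point cloud model to the current scan with ICP."""
    result = o3d.pipelines.registration.registration_icp(
        model, scan, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return model.transform(result.transformation)  # model now sits in the scan's frame
```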

# 4.3.3. Object extraction by furniture removal
- after alignment, all points corresponding to furniture are removed to extract the objects
- the system removes furniture according to segmentation using color information and three-dimensional positions
- more precisely, the point cloud is converted to an RGB color space and then segmented using a region-growing method

# 4.3.3. Object extraction by furniture removal
- each of the resulting segments is further segmented based on XYZ space
- the system then selects only those segments that overlap with the furniture model and removes them

<div style="text-align: center;">
<img src="./images/fig10.svg" alt="message" width="800">
</div>
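
A small stand-in for the color-based region growing (the paper does not specify an implementation; here, neighbours within a radius are merged as long as their colors stay similar):

```python
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, colors, radius=0.02, color_tol=0.1):
    """points: (N, 3) XYZ; colors: (N, 3) RGB in [0, 1]. Returns a label per point."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                      # already absorbed by an earlier region
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], radius):
                # grow into spatial neighbours whose color is close to the current point's
                if labels[j] == -1 and np.linalg.norm(colors[i] - colors[j]) < color_tol:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels  # segments overlapping the furniture model are then removed
```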

# 4.3.4. Segmentation of objects
- after the processing described so far, only the points associated with objects placed on the furniture remain
- these points are further segmented based on XYZ space
- the resulting segments are stored in the database
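
The XYZ-space segmentation amounts to Euclidean clustering; a sketch using DBSCAN as a stand-in (the paper does not name the clustering method, and the parameter values are assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_xyz(points, eps=0.02, min_points=30):
    """Split an (N, 3) point array into one array per spatially connected object."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in range(labels.max() + 1)]
```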

# 4.3.5. Comparison with the stored information
- finally, the system associates each segment from the previously stored information with the newly acquired information
- the system finds the unmatched segments and captures the movement of objects that has occurred since the latest data acquisition
- segments that do not match between the previous dataset and the newly acquired dataset reflect objects that were moved, assuming the objects were included in the previously stored dataset
- segments that appear in the most recent dataset but not in the previously stored dataset reflect objects that were recently placed on the furniture

# 4.3.5. Comparison with the stored information
- the set of segments included in the association process is determined according to the center positions of the segments
- for the segment sets from the previous dataset and the newly acquired dataset, the association is performed based on a threshold distance between their center positions, with the shape and color of the segments as the arguments for the association
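
A sketch of the centroid-based gating described above (the threshold value is an assumption; shape and color checks then decide the final match among the candidates):

```python
import numpy as np

def candidate_pairs(prev_segments, curr_segments, max_dist=0.05):
    """Pair up segments whose center positions lie within the threshold distance."""
    pairs = []
    for i, p in enumerate(prev_segments):
        for j, c in enumerate(curr_segments):
            if np.linalg.norm(p.mean(axis=0) - c.mean(axis=0)) < max_dist:
                pairs.append((i, j))
    return pairs
```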

# 4.3.5. Comparison with the stored information
- an elevation map describing heights above the furniture's reference surface level is used to represent the shape of an object
- the reference surface level of a piece of furniture is, more concretely, the top surface of a table or shelf, or the seat of a chair
- the elevation map is a grid version of the reference surface level and represents, for each grid cell, the vertical height of the points above the reference surface level

<div style="text-align: center;">
<img src="./images/fig11.svg" alt="message" width="800">
</div>
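
A sketch of building such an elevation map for one segment (the grid resolution is an assumed parameter):

```python
import numpy as np

def elevation_map(points, surface_z, cell=0.01):
    """Grid the reference surface and record each cell's maximum height above it."""
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(xy.max(axis=0) + 1)          # one cell per grid position
    heights = points[:, 2] - surface_z           # vertical height above the reference surface
    for (ix, iy), h in zip(xy, heights):
        grid[ix, iy] = max(grid[ix, iy], h)      # keep the tallest point in each cell
    return grid
```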

# 4.3.5. Comparison with the stored information
- the comparison is performed on the elevation map of each segment, taking into consideration the variations in size, the different values obtained from each grid cell, and the average value over the entire map
- the color information used to analyze the correlation between segments is the hue (H) and saturation (S)
- using these H-S histograms, the previous data and the newly acquired data are compared, allowing the system to determine whether it is dealing with the same objects

# 4.3.5. Comparison with the stored information
- the Bhattacharyya distance BC(p, q) between the H-S histograms p and q is used to determine the similarity between histograms and is calculated according to Eq. (1)
- once the distance values are calculated, two objects can be assumed to be the same when the degree of similarity is equal to or greater than the threshold value

<div style="text-align: center;">
<img src="./images/eq1.svg" alt="message" width="800">
</div>
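
A sketch of the H-S histogram comparison with OpenCV; the library choice is an assumption, and OpenCV's HISTCMP_BHATTACHARYYA computes the closely related Hellinger distance (0 for identical histograms), which may differ in detail from the paper's Eq. (1):

```python
import cv2

def hs_similarity(bgr_a, bgr_b, h_bins=30, s_bins=32):
    """Compare the hue-saturation histograms of two object image patches."""
    hists = []
    for img in (bgr_a, bgr_b):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins], [0, 180, 0, 256])
        hists.append(cv2.normalize(h, h))        # normalize so patch size does not dominate
    dist = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_BHATTACHARYYA)
    return 1.0 - dist                            # higher = more likely the same object
```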

# 4.3.5. Comparison with the stored information

<div style="text-align: center;">
<img src="./images/fig12.svg" alt="message" width="800">
</div>