diff slide.html @ 1:f8ef341d5822

Update
author Tatsuki IHA <e125716@ie.u-ryukyu.ac.jp>
date Fri, 03 Jun 2016 03:36:41 +0900
parents 83569495824e
children 44a72b1ed986
--- a/slide.html	Fri Jun 03 01:34:45 2016 +0900
+++ b/slide.html	Fri Jun 03 03:36:41 2016 +0900
@@ -87,7 +87,7 @@
 <!-- === begin markdown block ===
 
       generated by markdown/1.2.0 on Ruby 2.3.1 (2016-04-26) [x86_64-darwin15]
-                on 2016-06-03 01:31:33 +0900 with Markdown engine kramdown (1.11.1)
+                on 2016-06-03 03:35:15 +0900 with Markdown engine kramdown (1.11.1)
                   using options {}
   -->
 
@@ -582,6 +582,221 @@
 <div style="text-align: center;">
     <img src="./images/fig6.svg" alt="message" width="600" />
 </div>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="intelligent-cabinet-system-ics">4.2. Intelligent cabinet system (ICS)</h1>
+<ul>
+  <li>the cabinets installed in the room are equipped with RFID readers and load cells to detect the types and positions of the objects in the cabinet</li>
+  <li>every object in the environment has an RFID tag containing a unique ID that identifies it</li>
+  <li>this ID is used to retrieve the attributes of the object, such as its name and location, from the database (see the lookup sketch below)</li>
+</ul>
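+
+<p>As a rough illustration of the lookup, the sketch below resolves a tag ID to stored attributes; the schema, field names, and IDs are hypothetical and not the platform's actual database API.</p>
+
+<pre><code># minimal sketch of the ID-based attribute lookup (hypothetical schema)
+from dataclasses import dataclass
+
+@dataclass
+class ObjectRecord:
+    name: str
+    location: str                      # e.g. which cabinet / shelf holds the object
+
+# toy in-memory "database" keyed by the RFID tag's unique ID
+OBJECT_DB = {
+    "e200-0001": ObjectRecord(name="mug", location="cabinet-1/shelf-2"),
+    "e200-0002": ObjectRecord(name="remote", location="cabinet-1/shelf-1"),
+}
+
+def lookup(tag_id):
+    """Resolve a detected RFID tag ID to the object's stored attributes."""
+    return OBJECT_DB.get(tag_id)
+
+print(lookup("e200-0001"))             # ObjectRecord(name='mug', location='cabinet-1/shelf-2')
+</code></pre>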
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="intelligent-cabinet-system-ics-1">4.2. Intelligent cabinet system (ICS)</h1>
+<ul>
+  <li>using the RFID readers, we can detect the presence of a new object inside the cabinet</li>
+  <li>the load cell information allows us to determine its exact position inside the cabinet (a hypothetical sketch follows the figure below)</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/fig7.svg" alt="message" width="1200" />
+</div>
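+
+<p>The slides do not spell out the load-cell layout, so the following sketch simply assumes a small grid of load cells per shelf: the cell whose reading increases when a new tag is read gives the object's position, and the reading change gives its approximate weight.</p>
+
+<pre><code>import numpy as np
+
+# hypothetical sketch: a grid of load cells per shelf; the cell whose reading
+# increased the most when the new RFID tag appeared marks the object's position
+def locate_new_object(before, after, cell_size=0.15):
+    """before/after: 2-D arrays of load-cell readings in kg (rows x cols)."""
+    diff = np.asarray(after) - np.asarray(before)
+    row, col = np.unravel_index(np.argmax(diff), diff.shape)
+    x = (col + 0.5) * cell_size           # metres from the shelf's left edge
+    y = (row + 0.5) * cell_size           # metres from the shelf's front edge
+    return (x, y), float(diff[row, col])  # position on the shelf, added weight
+
+before = [[0.0, 0.0, 0.0],
+          [0.0, 0.0, 0.0]]
+after  = [[0.0, 0.0, 0.0],
+          [0.0, 0.31, 0.0]]               # a ~310 g object appeared here
+print(locate_new_object(before, after))   # ((0.225, 0.225), 0.31)
+</code></pre>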
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="object-detection-system-ods">4.3. Object detection system (ODS)</h1>
+<ul>
+  <li>the platform provides an object detection system that uses an RGB-D camera mounted on the robot to detect objects such as those placed on a desk</li>
+  <li>in this system, a newly appearing object or the movement of an object is detected as a change in the environment</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/fig8.svg" alt="message" width="600" />
+</div>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="object-detection-system-ods-1">4.3. Object detection system (ODS)</h1>
+<ul>
+  <li>the steps of the change detection process are as follows.
+    <ol>
+      <li>Identification of furniture</li>
+      <li>Alignment of the furniture model</li>
+      <li>Object extraction by furniture removal</li>
+      <li>Segmentation of objects</li>
+      <li>Comparison with the stored information</li>
+    </ol>
+  </li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="identification-of-furniture">4.3.1. Identification of furniture</h1>
+<ul>
+  <li>furniture can be identified based on the positions and postures of the robot and the furniture stored in the database</li>
+  <li>using this information, the system determines which range of the surrounding environment is actually being measured by the robot's camera</li>
+  <li>the system superimposes these results and the position information for furniture to create an updated furniture location model</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="identification-of-furniture-1">4.3.1. Identification of furniture</h1>
+<ul>
+  <li>the point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture’s point cloud model (Fig. 9b)</li>
+  <li>after merging the point clouds, the system deletes all points except those belonging to the furniture's point cloud model, limiting the processing range for the subsequent steps (see the cropping sketch after the figure)</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/fig9.svg" alt="message" width="800" />
+</div>
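+
+<p>A minimal sketch of limiting the processing range, assuming the measured scan and the furniture's point cloud model are already expressed in the same world frame: keep only the measured points inside the model's slightly inflated bounding box and discard the rest. The margin is an assumed value.</p>
+
+<pre><code>import numpy as np
+
+def crop_to_furniture(scan_xyz, model_xyz, margin=0.05):
+    # bounding box of the furniture model, inflated by a small margin
+    lo = model_xyz.min(axis=0) - margin
+    hi = model_xyz.max(axis=0) + margin
+    mask = np.all((scan_xyz >= lo) & (scan_xyz <= hi), axis=1)
+    return scan_xyz[mask]                 # only points near/on the furniture remain
+
+scan  = np.random.rand(10000, 3) * 3.0                    # fake room-scale scan
+model = np.array([[1.0, 1.0, 0.0], [1.8, 1.6, 0.9]])      # two corners of a table model
+print(crop_to_furniture(scan, model).shape)
+</code></pre>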
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="alignment-of-the-furniture-model">4.3.2. Alignment of the furniture model</h1>
+<ul>
+  <li>the scene is scanned twice to gather point cloud datasets of the previous and current scenes</li>
+  <li>in order to detect changes between the newly acquired information and the stored information, the two point cloud datasets obtained at different times must be aligned, because they are measured from different camera viewpoints</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="alignment-of-the-furniture-model-1">4.3.2. Alignment of the furniture model</h1>
+<ul>
+  <li>in this method, we do not try to align the point cloud data directly, but rather align the data using the point cloud model of the furniture</li>
+  <li>the reason for this is that a sufficient number of common areas cannot be found by simply combining the camera viewpoints of the two point cloud datasets; aligning via the model also reduces the amount of information that must be stored in memory</li>
+  <li>using the aligned point cloud model, the point cloud data of objects located on the furniture can be used without having to keep the point cloud data of the furniture itself in the stored data</li>
+  <li>alignment of the furniture model is performed using the ICP algorithm (a minimal sketch follows)</li>
+</ul>
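+
+<p>A minimal point-to-point ICP sketch in NumPy/SciPy, not the platform's implementation: match each source point to its nearest target point and solve for the rigid transform with an SVD (Kabsch), repeating for a fixed number of iterations. The iteration count and the toy data are assumptions.</p>
+
+<pre><code>import numpy as np
+from scipy.spatial import cKDTree
+
+def best_rigid_transform(src, dst):
+    # least-squares rotation/translation between matched point sets (Kabsch)
+    cs, cd = src.mean(axis=0), dst.mean(axis=0)
+    H = (src - cs).T @ (dst - cd)
+    U, _, Vt = np.linalg.svd(H)
+    R = Vt.T @ U.T
+    if np.linalg.det(R) < 0:              # avoid reflections
+        Vt[-1] *= -1
+        R = Vt.T @ U.T
+    return R, cd - R @ cs
+
+def icp(source, target, iters=30):
+    tree = cKDTree(target)
+    src = source.copy()
+    R_total, t_total = np.eye(3), np.zeros(3)
+    for _ in range(iters):
+        _, idx = tree.query(src)          # nearest-neighbour correspondences
+        R, t = best_rigid_transform(src, target[idx])
+        src = src @ R.T + t
+        R_total, t_total = R @ R_total, R @ t_total + t
+    return R_total, t_total               # maps the source into the target frame
+
+# usage: align the stored furniture model to the newly measured (cropped) scan
+model = np.random.rand(500, 3)
+c, s = np.cos(0.1), np.sin(0.1)
+Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
+scan = model @ Rz.T + np.array([0.05, -0.02, 0.0])
+R, t = icp(model, scan)
+print(np.round(R, 3), np.round(t, 3))     # recovers the 0.1 rad yaw and the offset
+</code></pre>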
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="object-extraction-by-furniture-removal">4.3.3. Object extraction by furniture removal</h1>
+<ul>
+  <li>after alignment, all points corresponding to furniture are removed to extract an object</li>
+  <li>the system removes furniture according to segmentation using color information and three-dimensional positions</li>
+  <li>more precisely, the point cloud is converted to the RGB color space and then segmented using a region-growing method (sketched below)</li>
+</ul>
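+
+<p>A hedged sketch of region growing on colour over a point cloud: starting from unvisited seed points, spatial neighbours whose colour is close to the seed's colour are absorbed into the same segment. The radius and colour thresholds are illustrative, not the paper's values.</p>
+
+<pre><code>import numpy as np
+from scipy.spatial import cKDTree
+
+def region_growing(xyz, rgb, radius=0.05, color_thresh=30.0):
+    tree = cKDTree(xyz)
+    labels = np.full(len(xyz), -1, dtype=int)   # -1 means "not yet visited"
+    current = 0
+    for seed in range(len(xyz)):
+        if labels[seed] != -1:
+            continue
+        labels[seed] = current
+        queue = [seed]
+        while queue:
+            p = queue.pop()
+            for n in tree.query_ball_point(xyz[p], radius):
+                close_color = np.linalg.norm(rgb[n] - rgb[seed]) < color_thresh
+                if labels[n] == -1 and close_color:
+                    labels[n] = current
+                    queue.append(n)
+        current += 1
+    return labels                               # one colour-consistent segment id per point
+
+xyz = np.random.rand(2000, 3)
+rgb = np.where(xyz[:, :1] < 0.5, [200.0, 40.0, 40.0], [40.0, 40.0, 200.0])
+print(np.unique(region_growing(xyz, rgb)).size) # number of colour segments found
+</code></pre>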
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="object-extraction-by-furniture-removal-1">4.3.3. Object extraction by furniture removal</h1>
+<ul>
+  <li>each of the resulting segments is further segmented based on XYZ space</li>
+  <li>the system then selects only those segments that overlap with the furniture model and removes them (see the sketch after the figure)</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/fig10.svg" alt="message" width="800" />
+</div>
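+
+<p>A small sketch of the removal step under assumed thresholds: a segment is treated as part of the furniture and dropped when a large fraction of its points lie very close to the aligned furniture model; the remaining segments are the object candidates.</p>
+
+<pre><code>import numpy as np
+from scipy.spatial import cKDTree
+
+def remove_furniture_segments(segments, model_xyz, dist=0.005, overlap=0.8):
+    tree = cKDTree(model_xyz)
+    kept = []
+    for seg in segments:                  # each seg: (N_i, 3) array of points
+        d, _ = tree.query(seg)            # distance of every point to the model
+        if np.mean(d < dist) < overlap:   # not mostly lying on the furniture
+            kept.append(seg)
+    return kept                           # candidate object segments
+
+model     = np.random.rand(1000, 3)
+on_model  = model[:50] + np.random.normal(0, 0.001, (50, 3))   # overlaps the model
+off_model = np.random.rand(50, 3) + 2.0                        # clearly elsewhere
+print(len(remove_furniture_segments([on_model, off_model], model)))  # -> 1
+</code></pre>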
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="segmentation-of-objects">4.3.4. Segmentation of objects</h1>
+<ul>
+  <li>after the preceding steps, only the points associated with objects placed on the furniture remain</li>
+  <li>these points are further segmented based on XYZ space (see the clustering sketch below)</li>
+  <li>the resulting segments are stored in the database</li>
+</ul>
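+
+<p>A sketch of the XYZ-space segmentation as Euclidean clustering: points closer than a distance tolerance are connected, and each connected component becomes one object segment. The 2 cm tolerance is an assumed value, not the paper's.</p>
+
+<pre><code>import numpy as np
+from scipy.spatial import cKDTree
+from scipy.sparse import coo_matrix
+from scipy.sparse.csgraph import connected_components
+
+def euclidean_segments(xyz, tol=0.02):
+    tree = cKDTree(xyz)
+    pairs = np.array(list(tree.query_pairs(tol)))   # index pairs closer than tol
+    n = len(xyz)
+    if len(pairs) == 0:
+        return np.arange(n)                         # every point is its own segment
+    graph = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
+    _, labels = connected_components(graph, directed=False)
+    return labels                                   # segment id per point
+
+a = np.random.normal([0.0, 0.0, 0.0], 0.005, (200, 3))   # object 1
+b = np.random.normal([0.3, 0.0, 0.0], 0.005, (200, 3))   # object 2, 30 cm away
+print(np.unique(euclidean_segments(np.vstack([a, b]))).size)   # -> 2 segments
+</code></pre>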
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="comparison-with-the-stored-infomation">4.3.5. Comparison with the stored infomation</h1>
+<ul>
+  <li>finally, the system associates each segment from the previously stored information with the newly acquired information</li>
+  <li>the system finds the unmatched segments and captures the movements of objects that have occurred since the last data acquisition</li>
+  <li>segments that did not match between the previous dataset and the newly acquired dataset reflect objects that were moved, assuming that the objects were included in the previously stored dataset</li>
+  <li>segments that appear in the most recent dataset, but not in the previously stored dataset, reflect objects that were recently placed on the furniture.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="comparison-with-the-stored-infomation-1">4.3.5. Comparison with the stored infomation</h1>
+<ul>
+  <li>the set of segments included in the association process is determined according to the center positions of the segments</li>
+  <li>for the segment sets from the previous dataset and the newly acquired dataset, the association is performed based on a threshold distance between their center positions, with the shape and color of the segments used as the criteria for the association (see the sketch below)</li>
+</ul>
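+
+<p>A sketch of the association step under assumed thresholds: each stored segment is paired with the nearest newly acquired segment whose centre lies within the distance threshold; the shape and colour comparison of the following slides then decides whether a pair really is the same object.</p>
+
+<pre><code>import numpy as np
+
+def associate(prev_centers, new_centers, max_dist=0.05):
+    pairs, unmatched_prev = [], []
+    used = set()
+    for i, c in enumerate(prev_centers):
+        d = np.linalg.norm(new_centers - c, axis=1)
+        j = int(np.argmin(d))
+        if d[j] < max_dist and j not in used:
+            pairs.append((i, j))          # candidate "same object" pair
+            used.add(j)
+        else:
+            unmatched_prev.append(i)      # object was moved or taken away
+    unmatched_new = [j for j in range(len(new_centers)) if j not in used]
+    return pairs, unmatched_prev, unmatched_new   # unmatched_new: newly placed objects
+
+prev = np.array([[0.10, 0.20, 0.75], [0.40, 0.10, 0.75]])
+new  = np.array([[0.11, 0.20, 0.75], [0.80, 0.50, 0.75]])
+print(associate(prev, new))               # ([(0, 0)], [1], [1])
+</code></pre>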
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="comparison-with-the-stored-infomation-2">4.3.5. Comparison with the stored infomation</h1>
+<ul>
+  <li>an elevation map that describes the height above the reference surface level of the furniture is used to represent the shape of an object</li>
+  <li>the reference surface level of the furniture is, more concretely, the top surface of a table or shelf, or the seat of a chair</li>
+  <li>the elevation map divides the reference surface into a grid and represents, for each grid cell, the vertical height of the points above the reference surface level (see the sketch after the figure)</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/fig11.svg" alt="message" width="800" />
+</div>
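+
+<p>A minimal sketch of building an elevation map, assuming the segment's points are expressed in the furniture's local frame with the origin at one corner of the reference surface; grid size and cell size are assumed values.</p>
+
+<pre><code>import numpy as np
+
+def elevation_map(points, surface_z, cell=0.01, size=0.30):
+    n = int(size / cell)
+    emap = np.zeros((n, n))
+    ix = np.clip((points[:, 0] / cell).astype(int), 0, n - 1)
+    iy = np.clip((points[:, 1] / cell).astype(int), 0, n - 1)
+    np.maximum.at(emap, (iy, ix), points[:, 2] - surface_z)
+    return emap          # vertical height above the reference surface per grid cell
+
+# toy segment: a 5 cm wide, 12 cm tall object standing on a table top at z = 0.70 m
+pts = np.column_stack([np.random.uniform(0.10, 0.15, 1000),
+                       np.random.uniform(0.10, 0.15, 1000),
+                       np.random.uniform(0.70, 0.82, 1000)])
+print(elevation_map(pts, surface_z=0.70).max())   # close to 0.12
+</code></pre>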
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="comparison-with-the-stored-infomation-3">4.3.5. Comparison with the stored infomation</h1>
+<ul>
+  <li>the comparison is performed on the elevation map of each segment, taking into consideration the variations in size, the different values obtained from each grid cell, and the average value over the entire map</li>
+  <li>the color information used to analyze the correlation between segments is the hue (H) and saturation (S)</li>
+  <li>using these H-S histograms, the previous data and the newly acquired data are compared, allowing the system to determine whether it is dealing with the same object (a sketch of building the histograms follows the figure)</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/fig11.svg" alt="message" width="800" />
+</div>
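+
+<p>A sketch of building the H-S histograms: convert each segment's RGB points to HSV, keep hue and saturation, and accumulate a normalised 2-D histogram. The bin counts are assumptions, not values from the paper.</p>
+
+<pre><code>import colorsys
+import numpy as np
+
+def hs_histogram(rgb, h_bins=30, s_bins=32):
+    rgb = np.clip(np.asarray(rgb, dtype=float), 0, 255)
+    hsv = np.array([colorsys.rgb_to_hsv(*(c / 255.0)) for c in rgb])
+    hist, _, _ = np.histogram2d(hsv[:, 0], hsv[:, 1],
+                                bins=[h_bins, s_bins],
+                                range=[[0, 1], [0, 1]])
+    return hist / hist.sum()              # normalised so the bins sum to 1
+
+red_cup  = np.tile([200.0, 30.0, 30.0], (500, 1)) + np.random.normal(0, 5, (500, 3))
+blue_cup = np.tile([30.0, 30.0, 200.0], (500, 1)) + np.random.normal(0, 5, (500, 3))
+p, q = hs_histogram(red_cup), hs_histogram(blue_cup)
+print(p.shape, q.shape)                   # (30, 32) (30, 32), compared via Eq. (1)
+</code></pre>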
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="comparison-with-the-stored-infomation-4">4.3.5. Comparison with the stored infomation</h1>
+<ul>
+  <li>the Bhattacharyya distance BC(p, q) between H-S histograms p and q is used to determine the similarity between the histograms and is calculated according to Eq. (1)</li>
+  <li>once the distance values are calculated, two objects are assumed to be the same if the degree of similarity is equal to or greater than the threshold value (see the sketch after the equation)</li>
+</ul>
+
+<div style="text-align: center;">
+    <img src="./images/eq1.svg" alt="message" width="800" />
+</div>
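+
+<p>For normalised histograms p and q, a common form of the comparison is the Bhattacharyya coefficient BC(p, q) = sum_i sqrt(p_i * q_i), which is 1 for identical histograms and 0 for non-overlapping ones; the exact expression used in Eq. (1) should be taken from the paper. The sketch and the 0.8 threshold below are assumptions.</p>
+
+<pre><code>import numpy as np
+
+def bhattacharyya_coefficient(p, q):
+    # p, q: normalised histograms of equal shape (e.g. the H-S histograms above)
+    return float(np.sum(np.sqrt(p * q)))
+
+def same_object(p, q, threshold=0.8):      # assumed threshold, not the paper's
+    return bhattacharyya_coefficient(p, q) >= threshold
+
+p = np.array([0.5, 0.3, 0.2])
+q = np.array([0.5, 0.3, 0.2])
+r = np.array([0.0, 0.1, 0.9])
+print(bhattacharyya_coefficient(p, q))     # 1.0 -> identical histograms
+print(same_object(p, r))                   # False -> treated as different objects
+</code></pre>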
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="comparison-with-the-stored-infomation-5">4.3.5. Comparison with the stored infomation</h1>
+
+<div style="text-align: center;">
+    <img src="./images/fig12.svg" alt="message" width="800" />
+</div>
 <!-- === end markdown block === -->
 </div>