comparison slide.html @ 1:f8ef341d5822

Update
author Tatsuki IHA <e125716@ie.u-ryukyu.ac.jp>
date Fri, 03 Jun 2016 03:36:41 +0900
parents 83569495824e
children 44a72b1ed986
comparison
0:83569495824e 1:f8ef341d5822
85 85
86 86 <div class='slide '>
87 87 <!-- === begin markdown block ===
88 88
89 89 generated by markdown/1.2.0 on Ruby 2.3.1 (2016-04-26) [x86_64-darwin15]
90 on 2016-06-03 01:31:33 +0900 with Markdown engine kramdown (1.11.1) 90 on 2016-06-03 03:35:15 +0900 with Markdown engine kramdown (1.11.1)
91 91 using options {}
92 92 -->
93 93
94 94 <!-- _S9SLIDE_ -->
95 95 <h1 id="introduction">1. Introduction</h1>
580 580 </ul>
581 581
582 582 <div style="text-align: center;">
583 583 <img src="./images/fig6.svg" alt="message" width="600" />
584 584 </div>
585
586
587 </div>
588 <div class='slide '>
589 <!-- _S9SLIDE_ -->
590 <h1 id="intelligent-cabinet-system-ics">4.2. Intelligent cabinet system (ICS)</h1>
591 <ul>
592 <li>the cabinets installed in the room are equipped with RFID readers and load cells to detect the types and positions of the objects in the cabinet</li>
593 <li>every object in the environment has an RFID tag containing a unique ID that identifies it</li>
594 <li>this ID is used to retrieve the object's attributes, such as its name and location, from the database (a lookup sketch is given below)</li>
595 </ul>
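<p>A minimal sketch of such a lookup, assuming a hypothetical <code>objects</code> table keyed by the tag ID; the table layout and example row are illustrative only, not the platform's actual database schema:</p>

<pre><code class="language-python">import sqlite3

# Hypothetical lookup of an object's attributes by its RFID tag ID; the table,
# columns, and example row are assumptions, not the platform's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (tag_id TEXT PRIMARY KEY, name TEXT, cabinet_id TEXT, x REAL, y REAL)")
conn.execute("INSERT INTO objects VALUES ('E200-1234', 'green tea bottle', 'cabinet_1', 0.20, 0.10)")

def attributes_for_tag(tag_id):
    row = conn.execute(
        "SELECT name, cabinet_id, x, y FROM objects WHERE tag_id = ?", (tag_id,)
    ).fetchone()
    if row is None:
        return None
    name, cabinet_id, x, y = row
    return {"name": name, "cabinet": cabinet_id, "position": (x, y)}

print(attributes_for_tag("E200-1234"))
</code></pre>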
596
597
598 </div>
599 <div class='slide '>
600 <!-- _S9SLIDE_ -->
601 <h1 id="intelligent-cabinet-system-ics-1">4.2. Intelligent cabinet system (ICS)</h1>
602 <ul>
603 <li>using the RFID readers, we can detect the presence of a new object inside the cabinet</li>
604 <li>the load cell information allows us to determine its exact position inside the cabinet (a sketch of this estimation follows the figure below)</li>
605 </ul>
606
607 <div style="text-align: center;">
608 <img src="./images/fig7.svg" alt="message" width="1200" />
609 </div>
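<p>As an illustration only: with a load cell under each corner of a shelf, the position of a newly placed object can be estimated from the change in the corner readings (a center-of-pressure calculation). The shelf geometry and readings below are made-up example values, not data from the platform:</p>

<pre><code class="language-python"># Estimate the (x, y) position of a newly added object on a shelf from the change
# in four corner load-cell readings (center of pressure of the added weight).
# Shelf geometry and readings are made-up example values.
def added_object_position(before, after, corners):
    deltas = [a - b for a, b in zip(after, before)]   # weight added at each corner
    total = sum(deltas)
    if total &lt;= 0:
        return None                                   # nothing was added
    x = sum(d * cx for d, (cx, _) in zip(deltas, corners)) / total
    y = sum(d * cy for d, (_, cy) in zip(deltas, corners)) / total
    return (x, y)

corners = [(0.0, 0.0), (0.6, 0.0), (0.0, 0.4), (0.6, 0.4)]   # shelf corners in metres
print(added_object_position([1.2, 1.1, 1.0, 1.3], [1.5, 1.2, 1.1, 1.6], corners))
</code></pre>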
610
611
612 </div>
613 <div class='slide '>
614 <!-- _S9SLIDE_ -->
615 <h1 id="object-detection-system-ods">4.3. Object detection system (ODS)</h1>
616 <ul>
617 <li>to detect objects such as those placed on a desk, the platform provides an object detection system that uses an RGB-D camera mounted on the robot</li>
618 <li>in this system, a newly appeared object or the movement of an existing object is detected as a change in the environment</li>
619 </ul>
620
621 <div style="text-align: center;">
622 <img src="./images/fig8.svg" alt="message" width="600" />
623 </div>
624
625
626 </div>
627 <div class='slide '>
628 <!-- _S9SLIDE_ -->
629 <h1 id="object-detection-system-ods-1">4.3. Object detection system (ODS)</h1>
630 <ul>
631 <li>the steps of the change detection process are as follows (a pipeline skeleton is sketched after this list).
632 <ol>
633 <li>Identification of furniture</li>
634 <li>Alignment of the furniture model</li>
635 <li>Object extraction by furniture removal</li>
636 <li>Segmentation of objects</li>
637 <li>Comparison with the stored information</li>
638 </ol>
639 </li>
640 </ul>
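<p>A skeleton of this pipeline, shown only to make the control flow explicit; every step is a stub and all names are placeholders, not the platform's API:</p>

<pre><code class="language-python"># Placeholder skeleton of the ODS change-detection pipeline (all names are illustrative).
def identify_furniture(scan, db): ...            # 4.3.1 identification of furniture
def align_furniture_model(scan, furniture): ...  # 4.3.2 alignment of the furniture model
def remove_furniture(scan, model): ...           # 4.3.3 object extraction by furniture removal
def segment_objects(points): ...                 # 4.3.4 segmentation of objects
def compare_with_stored(segments, db): ...       # 4.3.5 comparison with the stored information

def detect_changes(robot_scan, database):
    furniture = identify_furniture(robot_scan, database)
    model = align_furniture_model(robot_scan, furniture)
    object_points = remove_furniture(robot_scan, model)
    segments = segment_objects(object_points)
    return compare_with_stored(segments, database)
</code></pre>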
641
642
643 </div>
644 <div class='slide '>
645 <!-- _S9SLIDE_ -->
646 <h1 id="identification-of-furniture">4.3.1. Identification of furniture</h1>
647 <ul>
648 <li>furniture can be identified based on the positions and postures of the robot and the furniture stored in the database</li>
649 <li>using this information, the system determines the range of the surrounding environment that the robot's camera is actually measuring</li>
650 <li>the system superimposes these results onto the position information for the furniture to create an updated furniture location model</li>
651 </ul>
652
653
654 </div>
655 <div class='slide '>
656 <!-- _S9SLIDE_ -->
657 <h1 id="identification-of-furniture-1">4.3.1. Identification of furniture</h1>
658 <ul>
659 <li>the point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture’s point cloud model (Fig. 9b)</li>
660 <li>after merging the point clouds, the system deletes all points except those in the region of the furniture's point cloud model, limiting the processing range for the upcoming steps (a cropping sketch follows the figure below)</li>
661 </ul>
662
663 <div style="text-align: center;">
664 <img src="./images/fig9.svg" alt="message" width="800" />
665 </div>
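<p>A minimal NumPy sketch of this range limiting, assuming the robot scan and the furniture model are already expressed in a common coordinate frame; the 5 cm margin and the random arrays are arbitrary stand-ins:</p>

<pre><code class="language-python">import numpy as np

# Keep only scan points inside the furniture model's bounding box (plus a margin),
# which limits the processing range for the later steps; the margin is an example value.
def crop_to_furniture(scan_xyz, model_xyz, margin=0.05):
    lo = model_xyz.min(axis=0) - margin
    hi = model_xyz.max(axis=0) + margin
    inside = np.all((scan_xyz &gt;= lo) &amp; (scan_xyz &lt;= hi), axis=1)
    return scan_xyz[inside]

scan = np.random.rand(10000, 3) * 2.0        # stand-in for the robot's point cloud
model = np.random.rand(500, 3) * 0.8 + 0.5   # stand-in for the furniture point cloud model
print(crop_to_furniture(scan, model).shape)
</code></pre>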
666
667
668 </div>
669 <div class='slide '>
670 <!-- _S9SLIDE_ -->
671 <h1 id="alignment-of-the-furniture-model">4.3.2. Alignment of the furniture model</h1>
672 <ul>
673 <li>we scan the scene twice to gather point cloud datasets of the previous and the current scene</li>
674 <li>in order to detect changes between the newly acquired information and the stored information, the two point cloud datasets obtained at different times must be aligned, because they are measured from different camera viewpoints</li>
675 </ul>
676
677
678 </div>
679 <div class='slide '>
680 <!-- _S9SLIDE_ -->
681 <h1 id="alignment-of-the-furniture-model-1">4.3.2. Alignment of the furniture model</h1>
682 <ul>
683 <li>in this method, we do not align the point cloud data directly, but rather align it using the point cloud model for the furniture</li>
684 <li>the reason is that simply combining the two point cloud datasets taken from different camera viewpoints does not yield enough common area for alignment; aligning via the furniture model also reduces the amount of information that must be kept in memory</li>
685 <li>using the aligned point cloud model, the point cloud data for objects located on the furniture can be used without retrieving the stored point cloud data for the furniture itself</li>
686 <li>alignment of the furniture model is performed using the ICP algorithm (a sketch follows below)</li>
687 </ul>
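<p>A minimal sketch of this alignment step using Open3D's point-to-point ICP; the file names, the 2 cm correspondence threshold, and the identity initial guess are assumptions for illustration, not the platform's actual settings:</p>

<pre><code class="language-python">import numpy as np
import open3d as o3d

# Align the stored furniture point cloud model to the current scan with ICP.
# File names and the 2 cm correspondence threshold are placeholder choices.
source = o3d.io.read_point_cloud("furniture_model.pcd")   # stored furniture model
target = o3d.io.read_point_cloud("current_scan.pcd")      # cropped current scan

result = o3d.pipelines.registration.registration_icp(
    source, target, 0.02, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
aligned_model = source.transform(result.transformation)   # model in the scan's frame
print(result.fitness, result.inlier_rmse)
</code></pre>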
688
689
690 </div>
691 <div class='slide '>
692 <!-- _S9SLIDE_ -->
693 <h1 id="object-extraction-by-furniture-removal">4.3.3. Object extraction by furniture removal</h1>
694 <ul>
695 <li>after alignment, all points corresponding to furniture are removed to extract an object</li>
696 <li>the system removes furniture according to segmentation using color information and three-dimensional positions</li>
697 <li>more precisely, the point cloud is converted to the RGB color space and then segmented using a region-growing method</li>
698 </ul>
699
700
701 </div>
702 <div class='slide '>
703 <!-- _S9SLIDE_ -->
704 <h1 id="object-extraction-by-furniture-removal-1">4.3.3. Object extraction by furniture removal</h1>
705 <ul>
706 <li>each of the resulting color segments is then segmented again based on XYZ space</li>
707 <li>the system then selects the segments that overlap with the furniture model and removes them (a sketch follows the figure below)</li>
708 </ul>
709
710 <div style="text-align: center;">
711 <img src="./images/fig10.svg" alt="message" width="800" />
712 </div>
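<p>A rough sketch of this step: the points are clustered on position and color (DBSCAN is used here as a stand-in for the region-growing segmentation described above), and every cluster lying on the aligned furniture model is dropped; the color weight and the distance thresholds are example values:</p>

<pre><code class="language-python">import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

# Segment the cropped scan on position + color and drop segments that overlap the
# aligned furniture model. DBSCAN stands in for region growing; the 0.05 color weight
# and the 2 cm / 1 cm thresholds are example values.
def extract_objects(scan_xyz, scan_rgb, model_xyz, overlap_dist=0.01):
    features = np.hstack([scan_xyz, scan_rgb * 0.05])       # weight color against position
    labels = DBSCAN(eps=0.02, min_samples=10).fit_predict(features)

    model_tree = cKDTree(model_xyz)
    keep = np.zeros(len(scan_xyz), dtype=bool)
    for lbl in set(labels):
        if lbl == -1:                                        # DBSCAN noise points
            continue
        idx = np.where(labels == lbl)[0]
        dist, _ = model_tree.query(scan_xyz[idx])
        if np.median(dist) &gt; overlap_dist:                # segment does not lie on the furniture
            keep[idx] = True
    return scan_xyz[keep], scan_rgb[keep]
</code></pre>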
713
714
715 </div>
716 <div class='slide '>
717 <!-- _S9SLIDE_ -->
718 <h1 id="segmentation-of-objects">4.3.4. Segmentation of objects</h1>
719 <ul>
720 <li>after the preceding processing steps, only the points associated with objects placed on the furniture remain</li>
721 <li>these points are further segmented based on XYZ space</li>
722 <li>the resulting segments are stored in the database</li>
723 </ul>
724
725
726 </div>
727 <div class='slide '>
728 <!-- _S9SLIDE_ -->
729 <h1 id="comparison-with-the-stored-information">4.3.5. Comparison with the stored information</h1>
730 <ul>
731 <li>finally, the system associates each segment from the previously stored information with the newly acquired information</li>
732 <li>the system finds the unmatched segments and captures the movement of objects that has occurred since the latest data acquisition</li>
733 <li>segments from the previously stored dataset that have no match in the newly acquired dataset reflect objects that were moved, assuming those objects were included in the previously stored dataset</li>
734 <li>segments that appear in the most recent dataset, but not in the previously stored dataset, reflect objects that were recently placed on the furniture.</li>
735 </ul>
736
737
738 </div>
739 <div class='slide '>
740 <!-- _S9SLIDE_ -->
741 <h1 id="comparison-with-the-stored-information-1">4.3.5. Comparison with the stored information</h1>
742 <ul>
743 <li>the set of segments included in the association process is determined according to the center positions of the segments</li>
744 <li>for the segment sets from the previous dataset and the newly acquired dataset, the association is performed based on a threshold distance between their center positions, with the shape and color of the segments used as the matching criteria</li>
745 </ul>
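<p>A rough sketch of this association, assuming each segment is reduced to its centroid; the greedy nearest-centroid matching and the 10 cm gating threshold are illustrative choices, and the shape and color checks described in the text are left out here:</p>

<pre><code class="language-python">import numpy as np
from scipy.spatial.distance import cdist

# Greedily pair old and new segment centroids that lie within a gating distance;
# leftovers are candidates for moved or newly placed objects. 0.10 m is an example value.
def associate(old_centers, new_centers, max_dist=0.10):
    d = cdist(old_centers, new_centers)
    matches, used_old, used_new = [], set(), set()
    for flat in np.argsort(d, axis=None):                  # smallest distances first
        oi, ni = np.unravel_index(flat, d.shape)
        if d[oi, ni] &gt; max_dist:
            break
        if oi in used_old or ni in used_new:
            continue
        matches.append((int(oi), int(ni)))
        used_old.add(oi)
        used_new.add(ni)
    moved = [i for i in range(len(old_centers)) if i not in used_old]
    appeared = [j for j in range(len(new_centers)) if j not in used_new]
    return matches, moved, appeared

old = np.array([[0.10, 0.20, 0.80], [0.50, 0.50, 0.80]])
new = np.array([[0.12, 0.19, 0.80], [0.90, 0.10, 0.80]])
print(associate(old, new))   # one match; old[1] has moved, new[1] newly appeared
</code></pre>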
746
747
748 </div>
749 <div class='slide '>
750 <!-- _S9SLIDE_ -->
751 <h1 id="comparison-with-the-stored-information-2">4.3.5. Comparison with the stored information</h1>
752 <ul>
753 <li>an elevation map describing heights above the furniture's reference surface level is used to represent the shape of an object</li>
754 <li>the reference surface level of a piece of furniture is, more concretely, the top surface of a table or shelf, or the seat of a chair</li>
755 <li>the elevation map is a grid laid over the reference surface level, representing for each grid cell the vertical height of the points above that surface (a sketch follows the figure below)</li>
756 </ul>
757
758 <div style="text-align: center;">
759 <img src="./images/fig11.svg" alt="message" width="800" />
760 </div>
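<p>A minimal NumPy sketch of such an elevation map, assuming the segment's points are already expressed in a frame whose z = 0 plane is the furniture's reference surface; the 1 cm cell size and the random points are example values:</p>

<pre><code class="language-python">import numpy as np

# Build a per-segment elevation map: a 2-D grid over the reference surface that
# stores, in each cell, the maximum point height above that surface.
# The 1 cm cell size is an example value.
def elevation_map(points, cell=0.01):
    xy = points[:, :2]
    z = points[:, 2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1)
    np.maximum.at(grid, (ij[:, 0], ij[:, 1]), z)   # keep the highest point per cell
    return grid

seg = np.random.rand(500, 3) * [0.10, 0.05, 0.15]  # stand-in for one object segment
emap = elevation_map(seg)
print(emap.shape, emap.max())
</code></pre>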
761
762
763 </div>
764 <div class='slide '>
765 <!-- _S9SLIDE_ -->
766 <h1 id="comparison-with-the-stored-information-3">4.3.5. Comparison with the stored information</h1>
767 <ul>
768 <li>the comparison is performed on the elevation map of each segment, taking into consideration the variations in size, the values obtained from each grid cell, and the average value over the entire map</li>
769 <li>the color information used to analyze the correlation between segments consists of hue (H) and saturation (S)</li>
770 <li>using these H-S histograms, the previous data and the newly acquired data are compared, allowing the system to determine whether it is dealing with the same objects (a sketch follows the figure below)</li>
771 </ul>
772
773 <div style="text-align: center;">
774 <img src="./images/fig11.svg" alt="message" width="800" />
775 </div>
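<p>A minimal OpenCV sketch of this comparison, assuming each segment's colors are available as an RGB pixel array; the 30x32 bin layout is an example choice, and the random arrays only stand in for real segment pixels:</p>

<pre><code class="language-python">import cv2
import numpy as np

# Compare two segments by their hue-saturation (H-S) histograms; the 30x32 binning
# is an example choice and the random arrays stand in for real segment pixels.
def hs_histogram(rgb):
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

seg_a = np.random.randint(0, 255, (40, 40, 3), dtype=np.uint8)
seg_b = np.random.randint(0, 255, (40, 40, 3), dtype=np.uint8)
d = cv2.compareHist(hs_histogram(seg_a), hs_histogram(seg_b), cv2.HISTCMP_BHATTACHARYYA)
print("Bhattacharyya distance:", d)   # 0 = identical distributions, 1 = no overlap
</code></pre>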
776
777
778 </div>
779 <div class='slide '>
780 <!-- _S9SLIDE_ -->
781 <h1 id="comparison-with-the-stored-information-4">4.3.5. Comparison with the stored information</h1>
782 <ul>
783 <li>the Bhattacharyya distance BC(p, q) between the H-S histograms p and q is used to determine the similarity between the histograms and is calculated according to Eq. (1)</li>
784 <li>once the distance values are calculated, two segments can be assumed to correspond to the same object when their degree of similarity is equal to or greater than a threshold value (a sketch follows the equation below)</li>
785 </ul>
786
787 <div style="text-align: center;">
788 <img src="./images/eq1.svg" alt="message" width="800" />
789 </div>
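<p>Eq. (1) is only shown as an image above; assuming it takes the standard Bhattacharyya form over normalized histograms, a NumPy version of the similarity check would look like the sketch below (the 0.8 acceptance threshold is an arbitrary example, not the paper's value):</p>

<pre><code class="language-python">import numpy as np

# Bhattacharyya coefficient of two H-S histograms, assuming Eq. (1) is the standard
# form; the 0.8 acceptance threshold is an example value only.
def bhattacharyya_coefficient(p, q):
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))   # 1 = identical, 0 = disjoint

p = np.random.rand(30, 32)
q = np.random.rand(30, 32)
score = bhattacharyya_coefficient(p, q)
print(score, score &gt;= 0.8)               # treat as the same object above the threshold
</code></pre>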
790
791
792 </div>
793 <div class='slide '>
794 <!-- _S9SLIDE_ -->
795 <h1 id="comparison-with-the-stored-information-5">4.3.5. Comparison with the stored information</h1>
796
797 <div style="text-align: center;">
798 <img src="./images/fig12.svg" alt="message" width="800" />
799 </div>
585 800 <!-- === end markdown block === -->
586 801 </div>
587 802
588 803
589 804 </div><!-- presentation -->