
<!DOCTYPE html>
<html>
<head>
   <meta http-equiv="content-type" content="text/html;charset=utf-8">
   <title>Service robot system with an informationally structured environment</title>

<meta name="generator" content="Slide Show (S9) v2.5.0 on Ruby 2.3.1 (2016-04-26) [x86_64-darwin15]">
<meta name="author"    content="Tatsuki IHA, Nozomi TERUYA" >

<!-- style sheet links -->
<link rel="stylesheet" href="s6/themes/projection.css"   media="screen,projection">
<link rel="stylesheet" href="s6/themes/screen.css"       media="screen">
<link rel="stylesheet" href="s6/themes/print.css"        media="print">
<link rel="stylesheet" href="s6/themes/blank.css"        media="screen,projection">

<!-- JS -->
<script src="s6/js/jquery-1.11.3.min.js"></script>
<script src="s6/js/jquery.slideshow.js"></script>
<script src="s6/js/jquery.slideshow.counter.js"></script>
<script src="s6/js/jquery.slideshow.controls.js"></script>
<script src="s6/js/jquery.slideshow.footer.js"></script>
<script src="s6/js/jquery.slideshow.autoplay.js"></script>

<!-- prettify -->
<link rel="stylesheet" href="scripts/prettify.css">
<script src="scripts/prettify.js"></script>

<script>
  $(document).ready( function() {
    Slideshow.init();

    $('code').each(function(_, el) {
      if (!el.classList.contains('noprettyprint')) {
        el.classList.add('prettyprint');
        el.style.display = 'block';
      }
    });
    prettyPrint();
  } );

  
</script>

<!-- Better Browser Banner for Microsoft Internet Explorer (IE) -->
<!--[if IE]>
<script src="s6/js/jquery.microsoft.js"></script>
<![endif]-->



</head>
<body>

<div class="layout">
  <div id="header"></div>
  <div id="footer">
    <div align="right">
      <img src="s6/images/logo.svg" width="200px">
    </div>
  </div>
</div>

<div class="presentation">

  <div class='slide cover'>
    <table width="90%" height="90%" border="0" align="center">
      <tr>
        <td>
          <div align="center">
            <h1><font color="#808db5">Service robot system with an informationally structured environment</font></h1>
          </div>
        </td>
      </tr>
      <tr>
        <td>
          <div align="left">
            Tatsuki IHA, Nozomi TERUYA<br>
            Kono lab
            <hr style="color:#ffcc00;background-color:#ffcc00;text-align:left;border:none;width:100%;height:0.2em;">
          </div>
        </td>
      </tr>
    </table>
  </div>

<div class='slide '>
<!-- === begin markdown block ===

      generated by markdown/1.2.0 on Ruby 2.3.1 (2016-04-26) [x86_64-darwin15]
                on 2016-06-03 03:35:15 +0900 with Markdown engine kramdown (1.11.1)
                  using options {}
  -->

<!-- _S9SLIDE_ -->
<h1 id="introduction">1. Introduction</h1>
<ul>
  <li>aging of the population is a common problem in modern societies, and rapidly aging populations and declining birth rates have become more serious in recent years</li>
  <li>for instance, the manpower shortage in hospitals and elderly care facilities has led to the deterioration of quality of life for elderly individuals</li>
  <li>robot technology is expected to play an important role in the development of a healthy and sustainable society</li>
  <li>in particular, daily life assistance for elderly individuals in hospitals and care facilities is one of the most urgent and promising applications for service robots</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-1">1. Introduction</h1>
<ul>
  <li>for a service robot, information about its surroundings, such as the positions of objects, furniture, humans, and other robots, is indispensable for safely performing proper service tasks</li>
  <li>however, current sensing technology, especially for cases of robots equipped with external sensors, is not good enough to complete these tasks satisfactorily</li>
  <li>for example, a vision system is susceptible to changes in lighting conditions and the appearances of objects. moreover, the field of vision is rather narrow.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-2">1. Introduction</h1>
<ul>
  <li>although occlusions can be partly solved by sensors on a mobile robot, background changes and unfavorable vibrations of the robot body make these processes more difficult.</li>
  <li>in addition, the payload of a robot is rather low, and its computing resources are also limited.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-3">1. Introduction</h1>
<ul>
  <li>fixed sensors in an environment are more stable and can more easily gather information about the environment.</li>
  <li>if a sufficient number of sensors can be embedded in the environment in advance, occlusion is no longer a crucial problem.</li>
  <li>information required to perform tasks is acquired by distributed sensors and transmitted to a robot on demand.</li>
  <li>the concept of making an environment smarter rather than the robot is referred to as an informationally structured environment.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-4">1. Introduction</h1>
<ul>
  <li>an informationally structured environment is a feasible solution for introducing service robots into our daily lives using current technology</li>
  <li>several systems that observe human behavior using distributed sensor systems and provide proper service tasks according to requests from humans or automatically triggered emergency detection have been proposed</li>
  <li>several service robots that act as companions to elderly people or as assistants to humans who require special care have been developed</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-5">1. Introduction</h1>
<ul>
  <li>we have also been developing an informationally structured environment for assisting in the daily life of elderly people in our research project, i.e., the robot town project</li>
  <li>the goal of this project is to develop a distributed sensor network system covering a town-size environment consisting of several houses, buildings, and roads, and to manage robot services appropriately by monitoring events that occur in the environment.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-6">1. Introduction</h1>
<ul>
  <li>events sensed by an embedded sensor system are recorded in the town management system (TMS)</li>
  <li>and appropriate information about the surroundings and instructions for proper services are provided to each robot</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-7">1. Introduction</h1>
<ul>
  <li>we have also been developing an informationally structured platform (fig. 1), in which distributed sensors (fig. 2a) and actuators are installed to support an indoor service robot (fig. 2b)</li>
  <li>objects are embedded with sensors and RFID tags, and all of the data are stored in the TMS database</li>
  <li>a service robot performs various service tasks according to the environmental data stored in the TMS database, in collaboration with distributed sensors and actuators, for example, an actuator installed in a refrigerator to open its door.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-8">1. Introduction</h1>
<ul>
  <li>we herein introduce a new town management system called the ROS-TMS.</li>
  <li>in this system, the robot operating system (ROS) is adopted as a communication framework between various modules, including distributed sensors, actuators, robots, and databases</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-9">1. Introduction</h1>
<ul>
  <li>thanks to the ROS, we were able to develop a highly flexible and scalable system</li>
  <li>adding or removing modules such as sensors, actuators, and robots, to or from the system is simple and straightforward</li>
  <li>parallelization is also easily achievable.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-10">1. Introduction</h1>
<ul>
  <li>we herein report the following:
    <ul>
      <li>introduction of architecture and components of the ROS-TMS</li>
      <li>object detection using a sensing system of the ROS-TMS</li>
      <li>fetch-and-give task using the motion planning system of the ROS-TMS.</li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-11">1. Introduction</h1>
<ul>
  <li>the remainder of the present paper is organized as follows.
    <ul>
      <li>section 2: presents related research</li>
      <li>section 3: introduces the architecture and components of the ROS-TMS</li>
      <li>section 4: describes the sensing system of the ROS-TMS for processing the data acquired from various sensors</li>
      <li>section 5: describes the robot motion planning system of the ROS-TMS used to design the trajectories for moving, grasping, giving, and avoiding obstacles using the information on the environment acquired by the sensing system</li>
      <li>section 6: presents the experimental results for service tasks performed by a humanoid robot and the ROS-TMS</li>
      <li>section 7: concludes the paper.</li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research">2. Related research</h1>
<ul>
  <li>a considerable number of studies have been performed in the area of informationally structured environments/spaces to provide human-centric intelligent services</li>
  <li>informationally structured environments are referred to variously as home automation systems, smart homes, ubiquitous robotics, kukanchi, and intelligent spaces, depending on the field of research and the professional experience of the researcher</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-1">2. Related research</h1>
<ul>
  <li>home automation systems or smart homes are popular systems that centralize the control of lighting, heating, air conditioning, appliances, and doors, for example, to provide convenience, comfort, and energy savings</li>
  <li>the informationally structured environment can be categorized as this type of system, but it is designed to support not only human life but also robot activity for service tasks</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-2">2. Related research</h1>
<ul>
  <li>Hashimoto and Lee proposed an intelligent space in 1996</li>
  <li>intelligent spaces (iSpace) are rooms or areas that are equipped with intelligent devices, which enable spaces to perceive and understand what is occurring within them</li>
  <li>these intelligent devices have sensing, processing, and networking functions and are referred to as distributed intelligent networked devices (DINDs)</li>
  <li>one DIND consists of a CCD camera to acquire spatial information and a processing computer, which performs data processing and network interfacing</li>
  <li>these devices observe the position and behavior of both human beings and robots coexisting in the iSpace</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-3">2. Related research</h1>
<ul>
  <li>the concept of a physically embedded intelligent system (PEIS) was introduced in 2005</li>
  <li>PEIS involves the intersection and integration of three research areas: artificial intelligence, robotics, and ubiquitous computing</li>
  <li>anything that consists of software components with a physical embodiment and interacts with the environment through sensors or actuators/robots is considered to be a PEIS, and a set of interconnected physically embedded intelligent systems is defined as a PEIS ecology</li>
  <li>tasks can be achieved using either centralized or distributed approaches within the PEIS ecology</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-4">2. Related research</h1>
<ul>
  <li>Ubiquitous robotics involves the design and deployment of robots in smart network environments in which everything is interconnected</li>
  <li>three types of Ubibots are defined: software robots (Sobots), embedded robots (Embots), and mobile robots (Mobots), which can provide services using various devices through any network, at any place and at any time in a ubiquitous space (u-space)</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-5">2. Related research</h1>
<ul>
  <li>Embots can evaluate the current state of the environment using sensors, and convey that information to users</li>
  <li>Mobots are designed to provide services and explicitly have the ability to manipulate u-space using robotic arms</li>
  <li>a Sobot is a virtual robot that has the ability to move to any location through a network and to communicate with humans</li>
  <li>The present authors have previously demonstrated the concept of a PEIS using Ubibots in a simulated environment and u-space [32,33].</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-6">2. Related research</h1>
<ul>
  <li>RoboEarth is essentially a World Wide Web for robots, namely, a giant network and database repository in which robots can share information and learn from each other about their behavior and their environment</li>
  <li>the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-7">2. Related research</h1>
<ul>
  <li>the informationally structured environment/space (also referred to as Kukanchi, a Japanese word meaning interactive human-space design and intelligence) has received a great deal of attention in robotics research as an alternative approach to the realization of a system of intelligent robots operating in our daily environment.</li>
  <li>human-centered systems require, in particular, sophisticated physical and information services, which are based on sensor networks, ubiquitous computing, and intelligent artifacts.</li>
  <li>information resources and accessibility within an environment are essential for people and robots.</li>
  <li>the environment surrounding people and robots should have a structured platform for gathering, storing, transforming, and providing information.</li>
  <li>such an environment is referred to as an informationally structured space</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-8">2. Related research</h1>
<ul>
  <li>in section 5, we present a coordinated motion planning technique for a fetch-and-give task, including handing over an object to a person</li>
  <li>the problem of handing over an object between a human and a robot has been studied in Human-Robot Interaction (HRI)</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-9">2. Related research</h1>
<ul>
  <li>the work that is closest to ours is the one by Dehais et al.</li>
  <li>in their study, a physiological and subjective evaluation for a handing-over task was presented</li>
  <li>the performance of hand-over tasks was evaluated according to three criteria: legibility, safety, and physical comfort</li>
  <li>these criteria are represented as fields of cost functions mapped around the human to generate ergonomic hand-over motions</li>
  <li>although their approach is similar to ours, we consider an additional criterion, that is, the manipulability of both a robot and a human, for a comfortable and safe fetch-and-give task</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-10">2. Related research</h1>
<ul>
  <li>the problem of pushing carts using robots has been reported in many studies so far</li>
  <li>earlier studies on pushing a cart used a single manipulator mounted on a mobile base</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-11">2. Related research</h1>
<ul>
  <li>the problem of towing a trailer has also been discussed as an application of a mobile manipulator and a cart</li>
  <li>this work is close to the approach in this paper; however, in our technique the pivot point of the cart is placed in front of the robot.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-12">2. Related research</h1>
<ul>
  <li>the work that is closest to ours is the one by Scholz et al.</li>
  <li>they provided a solution for real-time navigation in a cluttered indoor environment using 3D sensing</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-13">2. Related research</h1>
<ul>
  <li>many previous works focus on the navigation and control problems for movable objects.</li>
  <li>On the other hand, we consider the problem including handing over an object to a human using a wagon, and propose a total motion planning technique for a fetch-and-give task with a wagon</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms">3. Overview of the ROS-TMS</h1>
<ul>
  <li>in the present paper, we extend the TMS and develop a new Town Management System called the ROS-TMS</li>
  <li>This system has three primary components
    <ul>
      <li>real-world</li>
      <li>database</li>
      <li>cyber-world</li>
    </ul>
  </li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig3.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-1">3. Overview of the ROS-TMS</h1>
<ul>
  <li>events occurring in the real world, such as user behavior or user requests, and the current situation of the real world are sensed by a distributed sensing system.</li>
  <li>the gathered information is then stored in the database</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig3.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-2">3. Overview of the ROS-TMS</h1>
<ul>
  <li>appropriate service commands are planned using the environmental information in the database and are simulated carefully in the cyber world using simulators such as Choreonoid</li>
  <li>service tasks are assigned to service robots in the real world</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig3.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-3">3. Overview of the ROS-TMS</h1>
<ul>
  <li>the following functions are implemented in the ROS-TMS
    <ol>
      <li>Communication with sensors, robots, and databases</li>
      <li>Storage, revision, backup, and retrieval of real-time information in an environment</li>
      <li>Maintenance and provision of information according to individual IDs assigned to each object and robot</li>
      <li>Notification of the occurrence of particular predefined events, such as accidents</li>
      <li>Task schedule function for multiple robots and sensors</li>
      <li>Human-system interaction for user requests</li>
      <li>Real-time task planning for service robots</li>
    </ol>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-4">3. Overview of the ROS-TMS</h1>
<ul>
  <li>ROS-TMS has unique features, as described below
    <ul>
      <li>Scalability
        <ul>
          <li>ROS-TMS is designed to have high scalability so that it can handle not only a single room but also a building and a town</li>
        </ul>
      </li>
      <li>Diversity
        <ul>
          <li>the ROS-TMS supports a variety of sensors and robots</li>
          <li>for instance, Vicon MX (Vicon Motion Systems Ltd.), TopUrg (Hokuyo Automatic), Velodyne 32e (Velodyne Lidar), and Oculus Rift (Oculus VR) are installed in the developed informationally structured platform</li>
        </ul>
      </li>
      <li>Safety
        <ul>
          <li>data gathered from the real world is used to perform simulations in the cyber world in order to evaluate the safety and efficiency of designed tasks</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-5">3. Overview of the ROS-TMS</h1>
<ul>
  <li>ROS-TMS has unique features, as described below
    <ul>
      <li>Privacy protection
        <ul>
          <li>one important restriction in our intelligent environment is to install a small number of sensors to avoid interfering with the daily activity of people and to reduce the invasion of their privacy as far as possible</li>
          <li>we do not install conventional cameras in the environment</li>
        </ul>
      </li>
      <li>Economy
        <ul>
          <li>sensors installed in an environment can be shared with robots and tasks, and thus we do not need to equip individual robots with numerous sensors</li>
          <li>in addition, most sensors are processed by low-cost single-board computers in the proposed system</li>
          <li>this concept is especially advantageous for systems consisting of multiple robots, since the robots can share the resources in the environment</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-6">3. Overview of the ROS-TMS</h1>
<ul>
  <li>some features such as modularity, scalability, and diversity owe much to ROS’s outstanding features</li>
  <li>on the other hand, economy and processing efficiency strongly depend on the unique features of the ROS-TMS, since various information gathered by the distributed sensor networks is structured, stored in the database, and repeatedly utilized for planning various service tasks by robots or other systems</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-7">3. Overview of the ROS-TMS</h1>
<ul>
  <li>ROS-TMS is composed of five components
    <ul>
      <li>User</li>
      <li>Sensor</li>
      <li>Robot</li>
      <li>Task</li>
      <li>Data</li>
    </ul>
  </li>
  <li>each component is also composed of sub-modules
    <ul>
      <li>such as the User Request sub-module for the user component</li>
    </ul>
  </li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig4.svg" alt="message" width="450" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="sensing-system">4. Sensing system</h1>
<ul>
  <li>the sensing system (TMS_SS) is a component of the ROS-TMS that processes the data acquired from various environment sensors</li>
  <li>TMS_SS is composed of three sub-packages
    <ul>
      <li>Floor sensing system (FSS)</li>
      <li>Intelligent cabinet system (ICS)</li>
      <li>Object detection system (ODS)</li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="floor-sensing-systemfss">4.1 Floor sensing system(FSS)</h1>
<ul>
  <li>the current platform is equipped with a floor sensing system to detect objects on the floor and people walking around</li>
  <li>this sensing system is composed of a laser range finder (LRF) located on one side of the room and a mirror installed along another side of the room</li>
  <li>this configuration reduces the dead angles of the LRF and is more robust against occlusions</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig6.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="floor-sensing-systemfss-1">4.1 Floor sensing system(FSS)</h1>
<ul>
  <li>people tracking is performed by first applying static background subtraction and then extracting clusters in the remainder of the measurements</li>
  <li>this system can measure the poses of the robot and movable furniture such as a wagon using tags, which have encoded reflection patterns optically identified by the LRF</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig6.svg" alt="message" width="600" />
</div>
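The tracking step above (static background subtraction on the LRF scan, then clustering the remaining measurements) can be sketched as follows. This is a minimal illustration, not the system's actual code; the subtraction threshold, beam-gap rule, and scan geometry are assumptions for the example.

```python
import math

def track_people(background, scan, angle_min=0.0, angle_step=math.pi / 512,
                 diff_thresh=0.15, gap=2):
    """Static background subtraction on an LRF scan, then clustering.

    background, scan: lists of range readings [m], one per beam.
    Returns the (x, y) centroid of each cluster of foreground beams.
    """
    # 1. Background subtraction: keep beams that differ from the static scene.
    fg = [i for i, (b, r) in enumerate(zip(background, scan))
          if abs(b - r) > diff_thresh]

    # 2. Cluster foreground beams whose indices are within `gap` of each other.
    clusters, current = [], []
    for i in fg:
        if current and i - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(i)
    if current:
        clusters.append(current)

    # 3. Convert each cluster to a Cartesian centroid.
    centroids = []
    for c in clusters:
        xs = [scan[i] * math.cos(angle_min + i * angle_step) for i in c]
        ys = [scan[i] * math.sin(angle_min + i * angle_step) for i in c]
        centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```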


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="intelligent-cabinet-system-ics">4.2. Intelligent cabinet system (ICS)</h1>
<ul>
  <li>the cabinets installed in the room are equipped with RFID readers and load cells to detect the types and positions of the objects in the cabinet</li>
  <li>every object in the environment has an RFID tag containing a unique ID that identifies it</li>
  <li>this ID is used to retrieve the attributes of the object, such as its name and location in the database</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="intelligent-cabinet-system-ics-1">4.2. Intelligent cabinet system (ICS)</h1>
<ul>
  <li>using the RFID readers, we can detect the presence of a new object inside the cabinet</li>
  <li>the load cell information allows us to determine its exact position inside the cabinet</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig7.svg" alt="message" width="1200" />
</div>
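A minimal sketch of how the RFID and load-cell readings might be combined to localize a newly stored object; the tag IDs, cell layout, and weight threshold below are hypothetical, not taken from the actual ICS implementation.

```python
def locate_new_object(tags_before, tags_after, cells_before, cells_after,
                      weight_thresh=0.05):
    """Combine RFID and load-cell data from an intelligent cabinet.

    tags_*:  sets of RFID tag IDs read by the cabinet's reader.
    cells_*: {cell_id: weight in kg} reported by the load cells.
    Returns (tag_id, cell_id) for a newly stored object, or None.
    """
    new_tags = tags_after - tags_before
    if not new_tags:
        return None                      # no new object detected by RFID
    # The cell whose weight increased holds the new object.
    for cell_id, w in cells_after.items():
        if w - cells_before.get(cell_id, 0.0) > weight_thresh:
            return (new_tags.pop(), cell_id)
    return None
```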


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-detection-system-ods">4.3. Object detection system (ODS)</h1>
<ul>
  <li>to detect objects such as those placed on a desk, this platform provides an object detection system that uses an RGB-D camera mounted on a robot</li>
  <li>in this system, a newly appeared object or the movement of an object is detected as a change in the environment.</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig8.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-detection-system-ods-1">4.3. Object detection system (ODS)</h1>
<ul>
  <li>the steps of the change detection process are as follows.
    <ol>
      <li>Identification of furniture</li>
      <li>Alignment of the furniture model</li>
      <li>Object extraction by furniture removal</li>
      <li>Segmentation of objects</li>
      <li>Comparison with the stored information</li>
    </ol>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="identification-of-furniture">4.3.1. Identification of furniture</h1>
<ul>
  <li>furniture can be identified based on the positions and postures of robots and furniture stored in the database</li>
  <li>using this information, the system determines which range of the surrounding environment the robot's cameras are actually measuring.</li>
  <li>the system superimposes these results and the position information for furniture to create an updated furniture location model</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="identification-of-furniture-1">4.3.1. Identification of furniture</h1>
<ul>
  <li>the point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture’s point cloud model (Fig. 9b)</li>
  <li>after merging the point clouds, the system deletes all points except the point cloud model for the furniture, which limits the processing range for the upcoming steps</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig9.svg" alt="message" width="800" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="alignment-of-the-furniture-model">4.3.2. Alignment of the furniture model</h1>
<ul>
  <li>we scan the scene twice to gather point cloud datasets of the previous and current scenes.</li>
  <li>in order to detect the change in the newly acquired information and stored information, it is necessary to align two point cloud datasets obtained at different times because these data are measured from different camera viewpoints</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="alignment-of-the-furniture-model-1">4.3.2. Alignment of the furniture model</h1>
<ul>
  <li>in this method, we do not try to directly align the point cloud data, but rather to align the data using the point cloud model for the furniture</li>
  <li>the reason for this is that a sufficient number of common areas could not be determined by simply combining the camera viewpoints of the two point cloud datasets; this approach also reduces the amount of information that must be stored in memory</li>
  <li>using the aligned point cloud model, it is possible to use the point cloud data for objects located on the furniture, without having to use the point cloud data for furniture from the stored data</li>
  <li>alignment of the furniture model is performed using the ICP algorithm</li>
</ul>
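The alignment step can be illustrated with a minimal point-to-point ICP: brute-force nearest-neighbor matching plus an SVD-based rigid transform, iterated. This is a sketch of the general algorithm, not the implementation used in this system.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP with brute-force nearest neighbors."""
    src = source.copy()
    for _ in range(iterations):
        # Match every source point to its nearest target point.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Estimate and apply the rigid transform for this iteration.
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
    return src
```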


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-extraction-by-furniture-removal">4.3.3. Object extraction by furniture removal</h1>
<ul>
  <li>after alignment, all points corresponding to furniture are removed to extract an object</li>
  <li>the system removes furniture according to segmentation using color information and three-dimensional positions</li>
  <li>more precisely, the point cloud is converted to an RGB color space and then segmented using a region-growing method</li>
</ul>
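A minimal sketch of region growing, shown here on a 2-D grid of hue values rather than a colored point cloud; the 4-neighborhood rule and hue threshold are assumptions for the example.

```python
from collections import deque

def region_growing(grid, hue_thresh=10):
    """Segment a 2-D grid of hue values by region growing.

    A neighboring cell joins a region when its hue differs from the
    region's seed hue by less than `hue_thresh`. Returns a grid of
    integer labels, one label per region.
    """
    h, w = len(grid), len(grid[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Grow a new region from this unlabeled seed cell (BFS).
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1
                            and abs(grid[ny][nx] - grid[sy][sx]) < hue_thresh):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels
```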


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-extraction-by-furniture-removal-1">4.3.3. Object extraction by furniture removal</h1>
<ul>
  <li>each of the resulting segments is further segmented based on XYZ space.</li>
  <li>the system then selects only those segments that overlap with the model and removes them.</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig10.svg" alt="message" width="800" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="segmentation-of-objects">4.3.4. Segmentation of objects</h1>
<ul>
  <li>after the processing up to this point, only the points associated with objects placed on furniture remain</li>
  <li>these points are further segmented based on XYZ space</li>
  <li>the resulting segments are stored in the database</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation">4.3.5. Comparison with the stored information</h1>
<ul>
  <li>finally, the system associates each segment from the previously stored information with the newly acquired information</li>
  <li>the system finds the unmatched segments and captures the movement of objects that has occurred since the latest data acquisition</li>
  <li>segments that did not match between the previous dataset and the newly acquired dataset reflect objects that were moved, assuming that the objects were included in the previously stored dataset</li>
  <li>segments that appear in the most recent dataset, but not in the previously stored dataset, reflect objects that were recently placed on the furniture.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-1">4.3.5. Comparison with the stored information</h1>
<ul>
  <li>the sets of segments included in the association process are determined according to the center positions of the segments.</li>
  <li>for the segment sets from the previous dataset and the newly acquired dataset, the association is performed based on a threshold distance between their center positions, using the shape and color of the segments as the arguments for the association.</li>
</ul>
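The association rule above can be sketched as a greedy matching on segment centers; the shape and color checks are omitted here, and the distance threshold is an assumed value.

```python
import math

def associate_segments(prev, curr, dist_thresh=0.05):
    """Greedy association of object segments by center distance.

    prev, curr: {segment_id: (x, y, z) center}. Segments whose centers
    lie within `dist_thresh` [m] are treated as the same object;
    unmatched previous segments were moved or removed, and unmatched
    current segments were newly placed.
    """
    pairs, used = [], set()
    for pid, pc in prev.items():
        best, best_d = None, dist_thresh
        for cid, cc in curr.items():
            d = math.dist(pc, cc)
            if cid not in used and d < best_d:
                best, best_d = cid, d
        if best is not None:
            pairs.append((pid, best))
            used.add(best)
    matched_prev = {p for p, _ in pairs}
    moved = [pid for pid in prev if pid not in matched_prev]
    new = [cid for cid in curr if cid not in used]
    return pairs, moved, new
```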


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-2">4.3.5. Comparison with the stored information</h1>
<ul>
  <li>we use an elevation map that describes the height of objects above the furniture's reference surface level to represent the shape of the object</li>
  <li>the reference surface level of furniture is, more concretely, the top surface of a table or shelf, or the seat of a chair</li>
  <li>the elevation map is a grid version of the reference surface level and represents the vertical height of each point with respect to the reference surface level in each grid cell</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig11.svg" alt="message" width="800" />
</div>
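A minimal sketch of building such an elevation map from the points left after furniture removal; the grid extent and cell resolution below are assumed values.

```python
def elevation_map(points, ref_z, x0, y0, cell=0.02, nx=10, ny=10):
    """Grid elevation map over a furniture reference surface.

    points: (x, y, z) points remaining after furniture removal.
    Each grid cell stores the maximum height of its points above the
    reference surface level `ref_z` (e.g. a table top); cells with no
    points stay at 0.
    """
    grid = [[0.0] * nx for _ in range(ny)]
    for x, y, z in points:
        ix, iy = int((x - x0) / cell), int((y - y0) / cell)
        if 0 <= ix < nx and 0 <= iy < ny and z > ref_z:
            grid[iy][ix] = max(grid[iy][ix], z - ref_z)
    return grid
```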


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-3">4.3.5. Comparison with the stored information</h1>
<ul>
  <li>comparison is performed on the elevation map for each segment, taking into consideration the variations in size, the different values obtained from each grid, and the average value for the entire map.</li>
  <li>the color information used to analyze the correlation between segments is the hue (H) and saturation (S)</li>
  <li>Using these H-S histograms, the previous data and the newly acquired data are compared, allowing the system to determine whether it is dealing with the same objects</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig11.svg" alt="message" width="800" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-4">4.3.5. Comparison with the stored information</h1>
<ul>
  <li>the Bhattacharyya distance BC(p, q) between the H-S histograms p and q is used to determine the similarity between histograms and is calculated according to Eq. (1)</li>
  <li>once the distance values are calculated, two objects are assumed to be the same when the degree of similarity is equal to or greater than the threshold value</li>
</ul>

<div style="text-align: center;">
    <img src="./images/eq1.svg" alt="message" width="800" />
</div>
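The exact form of Eq. (1) is given in the figure above; the sketch below uses the standard Bhattacharyya coefficient on normalized H-S histograms, with an assumed similarity threshold.

```python
import math

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient between two histograms.

    p, q: flattened H-S histograms with the same binning; they are
    normalized internally. Returns a value in [0, 1], where 1 means
    identical distributions.
    """
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))

def same_object(hist_prev, hist_curr, threshold=0.9):
    """Judge two segments to be the same object when the similarity
    reaches the threshold (the threshold value here is an assumption)."""
    return bhattacharyya_coefficient(hist_prev, hist_curr) >= threshold
```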


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-5">4.3.5. Comparison with the stored information</h1>

<div style="text-align: center;">
    <img src="./images/fig12.svg" alt="message" width="800" />
</div>
<!-- === end markdown block === -->
</div>


</div><!-- presentation -->
</body>
</html>