
<!DOCTYPE html>
<html>
<head>
   <meta http-equiv="content-type" content="text/html;charset=utf-8">
   <title>Service robot system with an informationally structured environment</title>

<meta name="generator" content="Slide Show (S9) v2.5.0 on Ruby 2.1.0 (2013-12-25) [x86_64-darwin13.0]">
<meta name="author"    content="Tatsuki IHA, Nozomi TERUYA" >

<!-- style sheet links -->
<link rel="stylesheet" href="s6/themes/projection.css"   media="screen,projection">
<link rel="stylesheet" href="s6/themes/screen.css"       media="screen">
<link rel="stylesheet" href="s6/themes/print.css"        media="print">
<link rel="stylesheet" href="s6/themes/blank.css"        media="screen,projection">

<!-- JS -->
<script src="s6/js/jquery-1.11.3.min.js"></script>
<script src="s6/js/jquery.slideshow.js"></script>
<script src="s6/js/jquery.slideshow.counter.js"></script>
<script src="s6/js/jquery.slideshow.controls.js"></script>
<script src="s6/js/jquery.slideshow.footer.js"></script>
<script src="s6/js/jquery.slideshow.autoplay.js"></script>

<!-- prettify -->
<link rel="stylesheet" href="scripts/prettify.css">
<script src="scripts/prettify.js"></script>

<script>
  $(document).ready( function() {
    Slideshow.init();

    $('code').each(function(_, el) {
      if (!el.classList.contains('noprettyprint')) {
        el.classList.add('prettyprint');
        el.style.display = 'block';
      }
    });
    prettyPrint();
  } );

  
</script>

<!-- Better Browser Banner for Microsoft Internet Explorer (IE) -->
<!--[if IE]>
<script src="s6/js/jquery.microsoft.js"></script>
<![endif]-->



</head>
<body>

<div class="layout">
  <div id="header"></div>
  <div id="footer">
    <div align="right">
      <img src="s6/images/logo.svg" width="200px">
    </div>
  </div>
</div>

<div class="presentation">

  <div class='slide cover'>
    <table width="90%" height="90%" border="0" align="center">
      <tr>
        <td>
          <div align="center">
            <h1><font color="#808db5">Service robot system with an informationally structured environment</font></h1>
          </div>
        </td>
      </tr>
      <tr>
        <td>
          <div align="left">
            Tatsuki IHA, Nozomi TERUYA
            Kono lab
            <hr style="color:#ffcc00;background-color:#ffcc00;text-align:left;border:none;width:100%;height:0.2em;">
          </div>
        </td>
      </tr>
    </table>
  </div>

<div class='slide '>
<!-- === begin markdown block ===

      generated by markdown/1.2.0 on Ruby 2.1.0 (2013-12-25) [x86_64-darwin13.0]
                on 2016-06-03 10:05:59 +0900 with Markdown engine kramdown (1.5.0)
                  using options {}
  -->

<!-- _S9SLIDE_ -->
<h1 id="introduction">1. Introduction</h1>
<ul>
  <li>aging of the population is a common problem in modern societies, and rapidly aging populations and declining birth rates have become more serious in recent years</li>
  <li>for instance, the manpower shortage in hospitals and elderly care facilities has led to the deterioration of quality of life for elderly individuals</li>
  <li>robot technology is expected to play an important role in the development of a healthy and sustainable society</li>
  <li>in particular, daily life assistance for elderly individuals in hospitals and care facilities is one of the most urgent and promising applications for service robots</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-1">1. Introduction</h1>
<ul>
  <li>for a service robot, information about its surroundings, such as the positions of objects, furniture, humans, and other robots, is indispensable for safely performing proper service tasks</li>
  <li>however, current sensing technology, especially for cases of robots equipped with external sensors, is not good enough to complete these tasks satisfactorily</li>
  <li>for example, a vision system is susceptible to changes in lighting conditions and in the appearance of objects; moreover, its field of vision is rather narrow</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-2">1. Introduction</h1>
<ul>
  <li>although occlusions can be partly solved by sensors on a mobile robot, background changes and unfavorable vibrations of the robot body make these processes more difficult</li>
  <li>in addition, the payload of a robot is limited, and its onboard computing resources are also limited</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-3">1. Introduction</h1>
<ul>
  <li>fixed sensors in an environment are more stable and can more easily gather information about the environment</li>
  <li>if a sufficient number of sensors can be embedded in the environment in advance, occlusion is no longer a crucial problem</li>
  <li>information required to perform tasks is acquired by distributed sensors and transmitted to a robot on demand</li>
  <li>the concept of making an environment smarter rather than the robot is referred to as an informationally structured environment</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-4">1. Introduction</h1>
<ul>
  <li>an informationally structured environment is a feasible solution for introducing service robots into our daily lives using current technology</li>
  <li>several systems have been proposed that observe human behavior using distributed sensor systems and provide proper services, either on request from a human or triggered automatically by emergency detection</li>
  <li>several service robots that act as companions to elderly people or as assistants to humans who require special care have been developed</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-5">1. Introduction</h1>
<ul>
  <li>we have also been developing an informationally structured environment for assisting the daily life of elderly people in our research project, i.e., the Robot Town Project</li>
  <li>the goal of this project is to develop a distributed sensor network system covering a town-size environment consisting of several houses, buildings, and roads, and to manage robot services appropriately by monitoring events that occur in the environment</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-6">1. Introduction</h1>
<ul>
  <li>events sensed by an embedded sensor system are recorded in the town management system (TMS)</li>
  <li>and appropriate information about the surroundings and instructions for proper services are provided to each robot</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-7">1. Introduction</h1>
<ul>
  <li>we have also been developing an informationally structured platform in which distributed sensors and actuators are installed to support an indoor service robot</li>
  <li>objects are equipped with embedded sensors and RFID tags, and all of the data are stored in the TMS database</li>
  <li>a service robot performs various service tasks according to the environmental data stored in the TMS database, in collaboration with distributed sensors and actuators, for example, an actuator installed in a refrigerator to open its door</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig1.svg" alt="message" width="700" />
</div>

<div style="text-align: center;">
    <img src="./images/fig2.svg" alt="message" width="700" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-8">1. Introduction</h1>
<ul>
  <li>we herein introduce a new town management system called the ROS-TMS</li>
  <li>in this system, the robot operating system (ROS) is adopted as a communication framework between various modules, including distributed sensors, actuators, robots, and databases</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-9">1. Introduction</h1>
<ul>
  <li>thanks to the ROS, we were able to develop a highly flexible and scalable system</li>
  <li>adding or removing modules such as sensors, actuators, and robots, to or from the system is simple and straightforward</li>
  <li>parallelization is also easily achievable</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-10">1. Introduction</h1>
<ul>
  <li>we herein report the following
    <ul>
      <li>introduction of architecture and components of the ROS-TMS</li>
      <li>object detection using a sensing system of the ROS-TMS</li>
      <li>fetch-and-give task using the motion planning system of the ROS-TMS</li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="introduction-11">1. Introduction</h1>
<ul>
  <li>the remainder of the present paper is organized as follows
    <ul>
      <li>section 2: related research</li>
      <li>section 3: the architecture and components of the ROS-TMS</li>
      <li>section 4: the sensing system of the ROS-TMS for processing the data acquired from various sensors</li>
      <li>section 5: the robot motion planning system of the ROS-TMS, used to design trajectories for moving, grasping, giving, and avoiding obstacles using the environmental information acquired by the sensing system</li>
      <li>section 6: experimental results for service tasks performed by a humanoid robot and the ROS-TMS</li>
      <li>section 7: conclusions</li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research">2. Related research</h1>
<ul>
  <li>a considerable number of studies have been performed in the area of informationally structured environments/spaces to provide human-centric intelligent services</li>
  <li>informationally structured environments are referred to variously as home automation systems, smart homes, ubiquitous robotics, kukanchi, and intelligent spaces, depending on the field of research and the professional experience of the researcher</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-1">2. Related research</h1>
<ul>
  <li>home automation systems or smart homes are popular systems that centralize the control of lighting, heating, air conditioning, appliances, and doors, for example, to provide convenience, comfort, and energy savings</li>
  <li>the informationally structured environment can be categorized as this kind of system, but it is designed to support not only human life but also robot activity for service tasks</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-2">2. Related research</h1>
<ul>
  <li>Hashimoto and Lee proposed the intelligent space in 1996</li>
  <li>intelligent spaces (iSpace) are rooms or areas that are equipped with intelligent devices, which enable spaces to perceive and understand what is occurring within them</li>
  <li>these intelligent devices have sensing, processing, and networking functions and are referred to as distributed intelligent networked devices (DINDs)</li>
  <li>one DIND consists of a CCD camera to acquire spatial information and a processing computer, which performs data processing and network interfacing</li>
  <li>these devices observe the position and behavior of both human beings and robots coexisting in the iSpace</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-3">2. Related research</h1>
<ul>
  <li>the concept of a physically embedded intelligent system (PEIS) was introduced in 2005</li>
  <li>PEIS involves the intersection and integration of three research areas: artificial intelligence, robotics, and ubiquitous computing</li>
  <li>anything that consists of software components with a physical embodiment and interacts with the environment through sensors or actuators/robots is considered to be a PEIS, and a set of interconnected physically embedded intelligent systems is defined as a PEIS ecology</li>
  <li>tasks can be achieved using either centralized or distributed approaches using the PEIS ecology</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-4">2. Related research</h1>
<ul>
  <li>Ubiquitous robotics involves the design and deployment of robots in smart network environments in which everything is interconnected</li>
  <li>three types of Ubibots are defined
    <ul>
      <li>software robots (Sobots)</li>
      <li>embedded robots (Embots)</li>
      <li>mobile robots (Mobots)</li>
    </ul>
  </li>
  <li>Ubibots can provide services using various devices through any network, at any place and at any time in a ubiquitous space (u-space)</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-5">2. Related research</h1>
<ul>
  <li>Embots can evaluate the current state of the environment using sensors, and convey that information to users</li>
  <li>Mobots are designed to provide services and explicitly have the ability to manipulate u-space using robotic arms</li>
  <li>Sobot is a virtual robot that has the ability to move to any location through a network and to communicate with humans</li>
  <li>The present authors have previously demonstrated the concept of a PIES using Ubibots in a simulated environment and u-space</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-6">2. Related research</h1>
<ul>
  <li>RoboEarth is essentially a World Wide Web for robots, namely, a giant network and database repository in which robots can share information and learn from each other about their behavior and their environment</li>
  <li>the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-7">2. Related research</h1>
<ul>
  <li>the informationally structured environment/space (also referred to as Kukanchi, a Japanese word meaning interactive human-space design and intelligence) has received a great deal of attention in robotics research as an alternative approach to the realization of a system of intelligent robots operating in our daily environment</li>
  <li>human-centered systems require, in particular, sophisticated physical and information services, which are based on sensor networks, ubiquitous computing, and intelligent artifacts</li>
  <li>information resources and accessibility within an environment are essential for people and robots</li>
  <li>the environment surrounding people and robots should have a structured platform for gathering, storing, transforming, and providing information</li>
  <li>such an environment is referred to as an informationally structured space</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-8">2. Related research</h1>
<ul>
  <li>in section 5, we present a coordinated motion planning technique for a fetch-and-give task, including handing over an object to a person</li>
  <li>the problem of handing over an object between a human and a robot has been studied in Human-Robot Interaction (HRI)</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-9">2. Related research</h1>
<ul>
  <li>the work that is closest to ours is that of Dehais et al.</li>
  <li>in their study, physiological and subjective evaluation for a handing over task was presented</li>
  <li>the performance of hand-over tasks was evaluated according to three criteria: legibility, safety, and physical comfort</li>
  <li>these criteria are represented as fields of cost functions mapped around the human to generate ergonomic hand-over motions</li>
  <li>although their approach is similar to ours, we consider an additional criterion, namely the manipulability of both the robot and the human, for a comfortable and safe fetch-and-give task</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-10">2. Related research</h1>
<ul>
  <li>the problem of pushing carts using robots has been reported in many studies so far</li>
  <li>earlier studies on pushing a cart used a single manipulator mounted on a mobile base</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-11">2. Related research</h1>
<ul>
  <li>the problem of towing a trailer has also been discussed as an application of a mobile manipulator and a cart</li>
  <li>this work is close to the approach in this paper; however, in our technique the pivot point of the cart is placed in front of the robot</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-12">2. Related research</h1>
<ul>
  <li>the work that is closest to ours is that of Scholz et al.</li>
  <li>they provided a solution for real-time navigation in a cluttered indoor environment using 3D sensing</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="related-research-13">2. Related research</h1>
<ul>
  <li>many previous works focus on the navigation and control problems for movable objects</li>
  <li>on the other hand, we consider the problem of handing over an object to a human using a wagon, and propose a total motion planning technique for a fetch-and-give task with a wagon</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms">3. Overview of the ROS-TMS</h1>
<ul>
  <li>in the present paper, we extend the TMS and develop a new Town Management System called the ROS-TMS</li>
  <li>This system has three primary components
    <ul>
      <li>real-world</li>
      <li>database</li>
      <li>cyber-world</li>
    </ul>
  </li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig3.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-1">3. Overview of the ROS-TMS</h1>
<ul>
  <li>events occurring in the real world, such as user behavior or user requests, and the current situation of the real world are sensed by a distributed sensing system</li>
  <li>the gathered information is then stored in the database</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig3.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-2">3. Overview of the ROS-TMS</h1>
<ul>
  <li>appropriate service commands are planned using the environmental information in the database and are simulated carefully in the cyber world using simulators such as Choreonoid</li>
  <li>service tasks are assigned to service robots in the real world</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig3.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-3">3. Overview of the ROS-TMS</h1>
<ul>
  <li>the following functions are implemented in the ROS-TMS
    <ol>
      <li>Communication with sensors, robots, and databases  </li>
      <li>Storage, revision, backup, and retrieval of real-time information in an environment</li>
      <li>Maintenance and provision of information according to the individual IDs assigned to each object and robot</li>
      <li>Notification of the occurrence of particular predefined events, such as accidents</li>
      <li>Task schedule function for multiple robots and sensors</li>
      <li>Human-system interaction for user requests</li>
      <li>Real-time task planning for service robots</li>
    </ol>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-4">3. Overview of the ROS-TMS</h1>
<ul>
  <li>ROS-TMS has unique features, as described below
    <ul>
      <li>Modularity
        <ul>
          <li>ROS-TMS consists of 73 packages categorized into 11 groups and 151 processing nodes.</li>
          <li>Re-configuration of structures, for instance adding or removing modules such as sensors, actuators, and robots, is simple and straightforward owing to the high flexibility of the ROS architecture</li>
        </ul>
      </li>
      <li>Scalability
        <ul>
          <li>ROS-TMS is designed to have high scalability so that it can handle not only a single room but also a building and a town</li>
        </ul>
      </li>
      <li>Diversity
        <ul>
          <li>The ROS–TMS supports a variety of sensors and robots</li>
          <li>for instance, Vicon MX (Vicon Motion Systems Ltd.), TopUrg (Hokuyo Automatic), Velodyne 32e (Velodyne Lidar), and Oculus Rift (Oculus VR) are installed in the developed informationally structured platform</li>
        </ul>
      </li>
      <li>Safety
        <ul>
          <li>data gathered from the real world is used to perform simulations in the cyber world in order to evaluate the safety and efficiency of designed tasks</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-5">3. Overview of the ROS-TMS</h1>
<ul>
  <li>ROS-TMS has unique features, as described below
    <ul>
      <li>Privacy protection
        <ul>
          <li>one important restriction in our intelligent environment is to install a small number of sensors to avoid interfering with the daily activity of people and to reduce the invasion of their privacy as far as possible</li>
          <li>we do not install conventional cameras in the environment</li>
        </ul>
      </li>
      <li>Economy
        <ul>
          <li>sensors installed in an environment can be shared with robots and tasks, and thus we do not need to equip individual robots with numerous sensors</li>
          <li>in addition, most sensors are processed by low-cost single-board computers in the proposed system</li>
          <li>this concept is especially advantageous for a system consisting of multiple robots, since the robots can share the resources in the environment</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-6">3. Overview of the ROS-TMS</h1>
<ul>
  <li>some features such as modularity, scalability, and diversity owe much to ROS’s outstanding features</li>
  <li>on the other hand, economic and processing efficiency depend strongly on the unique features of the ROS-TMS, since the various information gathered by distributed sensor networks is structured, stored in the database, and repeatedly utilized for planning service tasks by robots or other systems</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="overview-of-the-ros-tms-7">3. Overview of the ROS-TMS</h1>
<ul>
  <li>ROS-TMS is composed of five components
    <ul>
      <li>User</li>
      <li>Sensor</li>
      <li>Robot</li>
      <li>Task</li>
      <li>Data</li>
    </ul>
  </li>
  <li>components are also composed of sub-modules
    <ul>
      <li>such as the User Request sub-module for the user component</li>
    </ul>
  </li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig4.svg" alt="message" width="450" />
</div>
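<p>The concrete package and message names of the ROS-TMS are not listed on these slides, so the snippet below is only a hypothetical illustration of the modularity claim: under ROS, adding a sensor module to the system amounts to adding one node that publishes on an agreed topic, which a database node can subscribe to. The node name, topic name, and message type here are assumptions.</p>

<pre><code>#!/usr/bin/env python
# Hypothetical sensor module: joins the ROS-TMS by publishing observations.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node('new_sensor_module')       # adding a module = adding a node
    pub = rospy.Publisher('tms_db/observation', String, queue_size=10)  # assumed topic
    rate = rospy.Rate(1)                       # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data='object_id=25 x=1.0 y=2.0'))  # illustrative payload
        rate.sleep()

if __name__ == '__main__':
    main()
</code></pre>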


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="sensing-system">4. Sensing system</h1>
<ul>
  <li>the sensing system (TMS_SS) is the component of the ROS-TMS that processes the data acquired from various environmental sensors</li>
  <li>TMS_SS is composed of three sub-packages
    <ul>
      <li>Floor sensing system (FSS)</li>
      <li>Intelligent cabinet system (ICS)</li>
      <li>Object detection system (ODS)</li>
    </ul>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="floor-sensing-systemfss">4.1 Floor sensing system(FSS)</h1>
<ul>
  <li>the current platform is equipped with a floor sensing system to detect objects on the floor and people walking around</li>
  <li>this sensing system is composed of a laser range finder (LRF) located on one side of the room and a mirror installed along another side of the room</li>
  <li>this configuration reduces the dead angles of the LRF and is more robust against occlusions</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig6.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="floor-sensing-systemfss-1">4.1 Floor sensing system(FSS)</h1>
<ul>
  <li>people tracking is performed by first applying static background subtraction and then extracting clusters in the remainder of the measurements</li>
  <li>this system can measure the poses of the robot and movable furniture such as a wagon using tags, which have encoded reflection patterns optically identified by the LRF</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig6.svg" alt="message" width="600" />
</div>
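<p>The tracking code itself is not given in the paper; the following is a minimal sketch, in Python/NumPy, of the two steps described above: static background subtraction on an LRF scan followed by clustering of the remaining measurements. The scan layout, the margins, and the simple neighbour-gap clustering are illustrative assumptions.</p>

<pre><code>import numpy as np

def track_people(scan, background, angle_min=-np.pi / 2, angle_inc=np.pi / 360,
                 bg_margin=0.1, cluster_gap=0.15):
    """Extract person-like clusters from one LRF scan.

    scan, background : 1-D arrays of ranges [m] with identical angular layout.
    bg_margin        : a return is 'foreground' if it is this much closer
                       than the stored static background.
    cluster_gap      : neighbouring points closer than this [m] form one cluster.
    """
    angles = angle_min + angle_inc * np.arange(len(scan))
    # 1) static background subtraction: keep returns clearly in front of the background
    fg = scan &lt; (background - bg_margin)
    pts = np.column_stack([scan[fg] * np.cos(angles[fg]),
                           scan[fg] * np.sin(angles[fg])])

    # 2) cluster the remaining measurements by Euclidean proximity
    clusters, current = [], [0] if len(pts) else []
    for i in range(1, len(pts)):
        if np.linalg.norm(pts[i] - pts[i - 1]) &lt; cluster_gap:
            current.append(i)
        else:
            clusters.append(pts[current])
            current = [i]
    if current:
        clusters.append(pts[current])

    # centroid of each cluster is a person (or furniture-tag) candidate
    return [c.mean(axis=0) for c in clusters]
</code></pre>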


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="intelligent-cabinet-system-ics">4.2. Intelligent cabinet system (ICS)</h1>
<ul>
  <li>the cabinets installed in the room are equipped with RFID readers and load cells to detect the types and positions of the objects in the cabinet</li>
  <li>every object in the environment has an RFID tag containing a unique ID that identifies it</li>
  <li>this ID is used to retrieve the attributes of the object, such as its name and location in the database</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="intelligent-cabinet-system-ics-1">4.2. Intelligent cabinet system (ICS)</h1>
<ul>
  <li>using the RFID readers, we can detect the presence of a new object inside the cabinet</li>
  <li>the load cell information allows us to determine its exact position inside the cabinet</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig7.svg" alt="message" width="1200" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-detection-system-ods">4.3. Object detection system (ODS)</h1>
<ul>
  <li>to detect objects such as those placed on a desk, this platform provides an object detection system that uses an RGB-D camera mounted on a robot</li>
  <li>in this system, a newly appeared object or movement of an object is detected as a change in the environment</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig8.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-detection-system-ods-1">4.3. Object detection system (ODS)</h1>
<ul>
  <li>the steps of the change detection process are as follows
    <ol>
      <li>Identification of furniture</li>
      <li>Alignment of the furniture model</li>
      <li>Object extraction by furniture removal</li>
      <li>Segmentation of objects</li>
      <li>Comparison with the stored information</li>
    </ol>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="identification-of-furniture">4.3.1. Identification of furniture</h1>
<ul>
  <li>it is possible to identify furniture based on the positions and postures of robots and furniture stored in the database</li>
  <li>using this information, robot cameras determine the range of the surrounding environment that is actually being measured</li>
  <li>the system superimposes these results and the position information for furniture to create an updated furniture location model</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="identification-of-furniture-1">4.3.1. Identification of furniture</h1>
<ul>
  <li>the point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture’s point cloud model (Fig. 9b)</li>
  <li>after merging the point clouds, the system deletes all points except the point cloud model of the furniture, limiting the processing range for the upcoming steps</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig9.svg" alt="message" width="800" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="alignment-of-the-furniture-model">4.3.2. Alignment of the furniture model</h1>
<ul>
  <li>we scan the scene twice to gather point cloud datasets of the previous and current scenes</li>
  <li>in order to detect the change in the newly acquired information and stored information, it is necessary to align two point cloud datasets obtained at different times because these data are measured from different camera viewpoints</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="alignment-of-the-furniture-model-1">4.3.2. Alignment of the furniture model</h1>
<ul>
  <li>in this method, we do not try to directly align the point cloud data, but rather to align the data using the point cloud model for the furniture</li>
  <li>the reason is that a sufficient number of common areas cannot be determined by simply combining the camera viewpoints of the two point cloud datasets; aligning against the model also reduces the amount of information that must be stored in memory</li>
  <li>using the aligned point cloud model, it is possible to use the point cloud data for objects located on the furniture, without having to use the point cloud data for furniture from the stored data</li>
  <li>alignment of the furniture model is performed using the ICP algorithm (a minimal sketch follows below)</li>
</ul>
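<p>The paper uses ICP to align the stored furniture model with the newly measured scene but does not list the implementation; the sketch below is a minimal point-to-point ICP in Python (nearest neighbours via a k-d tree, rigid transform via SVD). A real system would likely use an optimized library implementation instead.</p>

<pre><code>import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Nx3 arrays)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) &lt; 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(model, scene, iters=30, tol=1e-6):
    """Align the furniture point cloud model to the scene; returns the moved model."""
    tree = cKDTree(scene)
    src = model.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)          # closest scene point for each model point
        R, t = best_rigid_transform(src, scene[idx])
        src = src @ R.T + t
        if abs(prev_err - dist.mean()) &lt; tol:   # converged
            break
        prev_err = dist.mean()
    return src
</code></pre>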


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-extraction-by-furniture-removal">4.3.3. Object extraction by furniture removal</h1>
<ul>
  <li>after alignment, all points corresponding to furniture are removed to extract an object</li>
  <li>the system removes furniture according to segmentation using color information and three-dimensional positions</li>
  <li>more precisely, the point cloud is converted to an RGB color space and then segmented using a region-growing method</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="object-extraction-by-furniture-removal-1">4.3.3. Object extraction by furniture removal</h1>
<ul>
  <li>each of the resulting segments is further segmented based on the XYZ space</li>
  <li>the system then selects only those segments that overlap with the furniture model and removes them</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig10.svg" alt="message" width="800" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="segmentation-of-objects">4.3.4. Segmentation of objects</h1>
<ul>
  <li>after the preceding processing steps, only the points associated with objects placed on the furniture remain</li>
  <li>these points are further segmented based on XYZ space</li>
  <li>the resulting segments are stored in the database</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation">4.3.5. Comparison with the stored infomation</h1>
<ul>
  <li>finally, the system associates each segment from the previously stored information with the newly acquired information</li>
  <li>the system finds the unmatched segments and captures the movement of objects that has occurred since the latest data acquisition</li>
  <li>segments that did not match between the previous dataset and the newly acquired dataset reflect objects that were moved, assuming that the objects were included in the previously stored dataset</li>
  <li>segments that appear in the most recent dataset, but not in the previously stored dataset, reflect objects that were recently placed on the furniture</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-1">4.3.5. Comparison with the stored infomation</h1>
<ul>
  <li>the set of segments included in the association process is determined according to the segments' center positions</li>
  <li>for the segment sets from the previous and the newly acquired datasets, the association is performed based on a threshold distance between their center positions, with the shape and color of the segments used as additional cues for the association</li>
</ul>
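<p>The association rule above (match segments whose centers lie within a distance threshold, then compare shape and color) could look like the following sketch; the threshold value and the greedy nearest-center matching are assumptions.</p>

<pre><code>import numpy as np

def associate_segments(prev, curr, dist_thresh=0.05):
    """Pair previous and current segments by center distance.

    prev, curr : lists of dicts with a 'center' key (3-vector); other keys
                 (elevation map, H-S histogram) are compared afterwards.
    Returns (matched index pairs, disappeared previous, newly appeared current).
    """
    pairs, used = [], set()
    for i, p in enumerate(prev):
        best_j, best_d = None, dist_thresh
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = np.linalg.norm(np.asarray(p['center']) - np.asarray(c['center']))
            if d &lt; best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    moved_or_removed = [i for i in range(len(prev)) if i not in {i0 for i0, _ in pairs}]
    appeared = [j for j in range(len(curr)) if j not in used]
    return pairs, moved_or_removed, appeared
</code></pre>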


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-2">4.3.5. Comparison with the stored infomation</h1>
<ul>
  <li>we use an elevation map that describes the height above the reference surface level of the furniture to represent the shape of the object</li>
  <li>the reference surface level of furniture is, more concretely, the top surface of a table or shelf, or the seat of a chair</li>
  <li>the elevation map is a grid over the reference surface level and represents, for each grid cell, the vertical height of the points above that surface</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig11.svg" alt="message" width="800" />
</div>
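<p>As an illustration of the elevation map described above, the sketch below grids the x-y footprint of a segment and records, per cell, the point height above the furniture's reference surface; the grid resolution and map size are assumed values.</p>

<pre><code>import numpy as np

def elevation_map(points, ref_height, cell=0.01, size=0.3):
    """Grid of object heights [m] above the furniture reference surface.

    points     : Nx3 array of a segment's points (x, y, z) in the furniture frame.
    ref_height : z of the reference surface (table top, shelf board, chair seat).
    cell, size : grid resolution and map side length (assumed values).
    """
    n = int(size / cell)
    grid = np.zeros((n, n))
    ix = np.clip(((points[:, 0] + size / 2) / cell).astype(int), 0, n - 1)
    iy = np.clip(((points[:, 1] + size / 2) / cell).astype(int), 0, n - 1)
    heights = np.maximum(points[:, 2] - ref_height, 0.0)
    np.maximum.at(grid, (ix, iy), heights)   # keep the highest point in each cell
    return grid
</code></pre>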


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-3">4.3.5. Comparison with the stored infomation</h1>
<ul>
  <li>the comparison is performed on the elevation map of each segment, taking into consideration the variations in size, the values of each grid cell, and the average value of the entire map</li>
  <li>the color information used to analyze the correlation between segments is the hue (H) and saturation (S)</li>
  <li>Using these H-S histograms, the previous data and the newly acquired data are compared, allowing the system to determine whether it is dealing with the same objects</li>
</ul>

<div style="text-align: center;">
    <img src="./images/fig11.svg" alt="message" width="800" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-4">4.3.5. Comparison with the stored infomation</h1>
<ul>
  <li>the Bhattacharyya distance BC(p, q) between H-S histograms p and q is used to determine the similarity between histograms and is calculated according to Eq. (1)</li>
  <li>once the distance values are calculated, two segments can be assumed to correspond to the same object when the degree of similarity is equal to or greater than the threshold value</li>
</ul>

<div style="text-align: center;">
    <img src="./images/eq1.svg" alt="message" width="800" />
</div>
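<p>Eq. (1) is shown only as an image above; the sketch below computes the same kind of similarity using the standard definitions of the Bhattacharyya coefficient and distance between two normalized H-S histograms. The bin counts and the hue/saturation ranges (OpenCV convention) are assumptions, not values taken from the paper.</p>

<pre><code>import numpy as np

def hs_histogram(hsv_points, h_bins=30, s_bins=32):
    """2-D hue-saturation histogram of a segment, normalized to sum to 1."""
    hist, _, _ = np.histogram2d(hsv_points[:, 0], hsv_points[:, 1],
                                bins=[h_bins, s_bins],
                                range=[[0, 180], [0, 256]])
    return hist / max(hist.sum(), 1e-12)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient (similarity) and distance between histograms p, q."""
    bc = np.sum(np.sqrt(p * q))            # coefficient in [0, 1]; 1 = identical
    return bc, -np.log(max(bc, 1e-12))     # distance; small distance = similar

# two segments are judged to be the same object when the similarity
# is equal to or greater than a chosen threshold
</code></pre>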


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="comparison-with-the-stored-infomation-5">4.3.5. Comparison with the stored infomation</h1>

<div style="text-align: center;">
    <img src="./images/fig12.svg" alt="message" width="600" />
</div>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="robot-motion-planning">5. Robot motion planning</h1>
<ul>
  <li>Robot motion planning (TMS_RP) is the component of the ROS–TMS that calculates the movement path of the robot and the trajectories of the robot arm for moving, giving, and avoiding obstacles based on information acquired from TMS_SS</li>
  <li>We consider the necessary planning to implement services such as fetch-and-give tasks because such tasks are among the most frequent tasks required by elderly individuals in daily life.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="robot-motion-planning-1">5. Robot motion planning</h1>
<ul>
  <li>Robot motion planning includes wagons for services that can carry and deliver a large number of objects, for example, at tea time or when handing out towels to residents in elderly care facilities, as shown in Fig. 14a<br />
<img src="./images2/fig14.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="robot-motion-planning-2">5. Robot motion planning</h1>
<ul>
  <li>Robot motion planning consists of sub-planning, integration, and evaluation of the planning described below to implement the fetch-and-give task.<br />
    <ol>
      <li>Grasp planning to grip a wagon  </li>
      <li>Position planning for goods delivery  </li>
      <li>Movement path planning  </li>
      <li>Path planning for wagons  </li>
      <li>Integration of planning  </li>
      <li>Evaluation of efficiency and safety  </li>
    </ol>
  </li>
  <li>Each planning, integration, and evaluation process uses environment data obtained from TMS_DB and TMS_SS.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon">5.1. Grasp planning to grip a wagon</h1>
<ul>
  <li>In order for a robot to push a wagon, the robot first needs to grasp the wagon.</li>
  <li>A robot can push a wagon in a stable manner if it grasps the wagon by two poles positioned on its sides.</li>
  <li>Thus, the number of base position options for the robot with respect to the wagon is reduced to four (indexed by i), as shown in Fig. 14.
<img src="./images2/fig14.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon-1">5.1. Grasp planning to grip a wagon</h1>
<ul>
  <li>The position and orientation of the wagon, as well as its size, are managed using the ROS–TMS database. Using this information, it is possible to determine the correct relative position.</li>
  <li>Based on the wagon direction when the robot is grasping its long side, valid candidate points can be determined using Eqs. (2) through (4), given in the next slide.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon-2">5.1. Grasp planning to grip a wagon</h1>
<ul>
  <li>Eqs. (2) through (4) below (i = 0, 1, 2, 3). Here, R represents the robot, and W represents the wagon. Subscripts x, y, and θ denote the corresponding x-coordinate, y-coordinate, and posture (rotation about the z-axis).
<img src="./images2/eq234.png" alt="opt" width="100%" /></li>
</ul>
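<p>Eqs. (2) through (4) are shown only as an image, so the snippet below is a guess at their intent rather than a transcription: it places four candidate base poses (i = 0..3), one off the middle of each side of the wagon, facing the wagon, given the wagon pose (Wx, Wy, Wθ) and its footprint. The clearance value is an assumption.</p>

<pre><code>import math

def grasp_candidates(wx, wy, wtheta, length, width, clearance=0.3):
    """Four candidate robot base poses (Rx, Ry, Rθ) around a wagon.

    The robot stands `clearance` metres off the middle of side i and faces
    the wagon centre; this reproduces the idea of Eqs. (2)-(4), not their
    exact form.
    """
    half = [length / 2, width / 2, length / 2, width / 2]
    poses = []
    for i in range(4):
        side_dir = wtheta + i * math.pi / 2          # outward normal of side i
        rx = wx + (half[i] + clearance) * math.cos(side_dir)
        ry = wy + (half[i] + clearance) * math.sin(side_dir)
        rtheta = side_dir + math.pi                  # face back toward the wagon
        poses.append((rx, ry, rtheta))
    return poses
</code></pre>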


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="grasp-planning-to-grip-a-wagon-3">5.1. Grasp planning to grip a wagon</h1>
<ul>
  <li>Fig. 13 shows the positional relationship between the robot and the wagon, given i=2.
<img src="./images2/fig13.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery">5.2. Position planning for goods delivery</h1>
<ul>
  <li>In order to hand over goods to a person, it is necessary to plan both the position of the goods to be delivered and the base position of the robot according to the person’s position.</li>
  <li>Using manipulability as an indicator for this planning, the system plans the position of the goods relative to the base position.</li>
  <li>Manipulability is represented by the degree to which hands/fingers can move when each joint angle is changed.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-1">5.2. Position planning for goods delivery</h1>
<ul>
  <li>When trying to deliver goods in postures with high manipulability, it is easier to modify the motion, even when small gaps exist between the robot and the person.</li>
  <li>We assume that high manipulability of the person's arm makes it more comfortable for him or her to grasp the goods. This relation is represented in Eqs. (5) and (6).</li>
  <li>The velocity vector V corresponds to the velocity of the hand position, and Q is the joint angle vector.
<img src="./images2/eq56.png" alt="opt" width="100%" /></li>
</ul>
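<p>Eqs. (5) and (6) relate hand velocity to joint velocity through the Jacobian, V = J(Q) dQ/dt, and manipulability is commonly defined as w = sqrt(det(J J^T)). The sketch below computes it for a planar serial arm as an illustration; the link lengths and postures are arbitrary assumptions.</p>

<pre><code>import numpy as np

def planar_jacobian(q, lengths):
    """2xN Jacobian of a planar serial arm's hand position w.r.t. joint angles."""
    phi = np.cumsum(q)                       # absolute angle of each link
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(lengths[i:] * np.sin(phi[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(phi[i:]))
    return J

def manipulability(q, lengths):
    """w = sqrt(det(J J^T)); larger values mean the hand can move more freely."""
    J = planar_jacobian(np.asarray(q, float), np.asarray(lengths, float))
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

# example: 3-link arm, fully stretched (singular) vs. a bent, comfortable posture
print(manipulability([0.0, 0.0, 0.0], [0.3, 0.3, 0.2]))
print(manipulability([0.3, 0.8, 0.5], [0.3, 0.3, 0.2]))
</code></pre>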


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-2">5.2. Position planning for goods delivery</h1>
<ul>
  <li>If the arm has a redundant degree of freedom, an infinite number of joint angle vectors corresponds to just one hand position.</li>
  <li>To resolve this, we calculate the posture with the highest manipulability within the range of possible joint angle movements.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-3">5.2. Position planning for goods delivery</h1>
<ul>
  <li>The planning procedure for the position of goods and the position of robots using manipulability is as follows:
    <ol>
      <li>The system maps the manipulability that corresponds to the robots and each person on the local coordinate system.</li>
      <li>Both manipulability maps are integrated, and the position of goods is determined.</li>
      <li>Based on the position of goods, the base position of the robot is determined.</li>
    </ol>
  </li>
  <li>We set the robot as the origin of the robot coordinate system, assuming the frontal direction as the x-axis and the lateral direction as the y-axis.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-4">5.2. Position planning for goods delivery</h1>
<ul>
  <li>This mapping is superimposed along the z-axis, which is the height direction, as shown in Fig. 15b.
<img src="./images2/fig15.png" alt="opt" width="80%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-5">5.2. Position planning for goods delivery</h1>
<ul>
  <li>The next step is to determine, using the manipulability map, the position of the goods that are about to be delivered.</li>
  <li>As shown in Fig. 16a, we take the maximum manipulability value according to each height, and retain the XY coordinates of each local coordinate system.</li>
  <li>These coordinates represent the relationship between the base position and the positions of the hands.
<img src="./images2/fig16.png" alt="opt" width="80%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-6">5.2. Position planning for goods delivery</h1>
<ul>
  <li>According to the calculated height on the manipulability map for a person, the system requests the absolute coordinates of the goods to be delivered, using the previously retained relative coordinates of the hands.</li>
  <li>The position of the person that will receive the delivered goods is managed through TMS_SS and TMS_DB, and it is also possible to use this position as a reference point to request the position of the goods by fitting the relative coordinates.</li>
  <li>According to the aforementioned procedure, we can determine the unique position of the goods that are about to be delivered.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-7">5.2. Position planning for goods delivery</h1>
<ul>
  <li>As the final step, the base position of the robot is determined in order to hold out the goods to their previously calculated position.</li>
  <li>According to the manipulability map that corresponds to the height of a specific object, the system retrieves the relationship between the positions of hands and the base position.</li>
  <li>Using the position of the object as a reference point, the robot is able to hold the object out to any determined position if the base position meets the criteria of this relationship.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-8">5.2. Position planning for goods delivery</h1>
<ul>
  <li>Consequently, at the time of delivery, points on the circumference of the position of the object are determined to be candidate points on the absolute coordinate system of the base position.</li>
  <li>Considering every point on the circumference as a candidate for the subsequent action planning would be redundant, so the system extracts only a limited number of candidate points.</li>
  <li>The approach taken is to split the circumference into n sectors, take a representative point from each sector, and thereby limit the number of candidate points.</li>
</ul>
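<p>The sketch below illustrates the sampling just described: the circumference around the object position is split into n sectors and one representative base-pose candidate is taken per sector. The radius (arm reach) and n are assumed values.</p>

<pre><code>import math

def candidate_base_poses(obj_x, obj_y, reach=0.6, n=8):
    """One representative robot base pose per circle sector around the object.

    The robot stands `reach` metres from the object position and faces it, so
    the previously planned hand position is reachable from every candidate.
    """
    poses = []
    for k in range(n):
        ang = 2 * math.pi * (k + 0.5) / n        # centre angle of sector k
        bx = obj_x + reach * math.cos(ang)
        by = obj_y + reach * math.sin(ang)
        poses.append((bx, by, ang + math.pi))    # orientation: toward the object
    return poses
</code></pre>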


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-9">5.2. Position planning for goods delivery</h1>
<ul>
  <li>After that, the obtained representative points are evaluated as in Eq. (7), while placing special emphasis on safety.</li>
  <li>Here, View is a Boolean value that represents whether the robot enters the field of vision of the target person. If it is inside the field of vision, then View is 1, otherwise View is 0.</li>
  <li>This calculation is necessary because if the robot can enter the field of vision of the target person, then the robot can be operated more easily and the risk of unexpected contact with the robot is also reduced.</li>
  <li>Dhuman represents the distance to the target person, and Dobs represents the distance to the nearest obstacle.
<img src="./images2/eq7.png" alt="opt" width="80%" /></li>
</ul>
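<p>Eq. (7) itself appears only as an image; the sketch below scores a candidate with the three quantities the text defines (View, Dhuman, Dobs) using an assumed weighted-sum form, simply to show how such a safety-oriented evaluation can be coded. The combination and the weights are assumptions, not the paper's equation.</p>

<pre><code>def evaluate_candidate(view, d_human, d_obs, w_view=1.0, w_human=0.5, w_obs=0.5):
    """Score one candidate base position (larger is better).

    view    : 1 if the robot would be inside the person's field of vision, else 0
    d_human : distance to the target person [m]
    d_obs   : distance to the nearest obstacle [m]
    """
    # assumed form: being visible and keeping distance from person/obstacles is safer
    return w_view * view + w_human * d_human + w_obs * d_obs

# the representative point with the highest score is selected, e.g.:
# best = max(candidates, key=lambda c: evaluate_candidate(*c))
</code></pre>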


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="position-planning-for-goods-delivery-10">5.2. Position planning for goods delivery</h1>
<ul>
  <li>In order to reduce the risk of contact with the target person or an obstacle, the positions that keep larger distances to the person and to the nearest obstacle are evaluated more highly.</li>
  <li>If all the candidate points on a given circumference sector result in contact with an obstacle, then the representative points of that sector are not selected.</li>
  <li>According to the aforementioned process, the base position of the robot is planned based on the position of the requested goods.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-robots">5.3. Movement path planning - Path planning for robots</h1>
<ul>
  <li>Path planning for robots that serve in a general living environment requires a high degree of safety, which can be achieved by lowering the probability of contact with persons.</li>
  <li>However, for robots that push wagons, the parameter space that uniquely defines this state has a maximum of six dimensions, that is, position (x,y) and posture (θ) of a robot and a wagon, and planning a path that represents the highest safety values in such a space is time consuming.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-robots-1">5.3. Movement path planning - Path planning for robots</h1>
<ul>
  <li>Thus, we require a method that produces a trajectory with a high degree of safety, but at the same time requires a short processing time. As such, we use a Voronoi map, as shown in Fig. 18.
<img src="./images2/fig18.png" alt="opt" width="50%" /></li>
</ul>
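<p>As an illustration of the Voronoi-based idea above, the sketch below builds a Voronoi diagram of 2-D obstacle sample points with SciPy and keeps only the vertices that lie inside the room and sufficiently far from any obstacle; those vertices and the ridges between them form a maximum-clearance roadmap (the basic path). The clearance value is an assumption.</p>

<pre><code>import numpy as np
from scipy.spatial import Voronoi, cKDTree

def voronoi_roadmap(obstacle_pts, room_min, room_max, clearance=0.4):
    """Roadmap vertices/edges that stay far from and equidistant to obstacles.

    obstacle_pts : Nx2 array of obstacle sample points (walls, furniture outlines).
    room_min/max : (x, y) corners of the room's bounding box.
    clearance    : minimum allowed distance from any obstacle [m].
    """
    vor = Voronoi(obstacle_pts)
    tree = cKDTree(obstacle_pts)
    keep = set()
    for i, v in enumerate(vor.vertices):
        inside = np.all(v &gt;= room_min) and np.all(v &lt;= room_max)
        if inside and tree.query(v)[0] &gt;= clearance:
            keep.add(i)
    edges = [(a, b) for a, b in vor.ridge_vertices
             if a in keep and b in keep]        # drops infinite (-1) vertices too
    return vor.vertices, edges
</code></pre>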


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
  <li>In order to be able to plan for wagons in real time, we need to reduce the dimensions of the path search space.</li>
  <li>The parameters that uniquely describe the state of a wagon pushing robot can have a maximum of six dimensions, but in reality the range in which the robot can operate the wagon is more limited.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-1">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
  <li>We set up a control point on the wagon, as shown in Fig. 19, and fix the relative positional relationship between the robot and this control point.
<img src="./images2/fig19.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-2">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
  <li>The operation of the robot is assumed to change in terms of the relative orientation (Wθ) of the wagon with respect to the robot.</li>
  <li>The range of relative positions is also limited.</li>
  <li>Accordingly, the state of a wagon-pushing robot is represented in just four dimensions, which shortens the search time for wagon path planning.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-3">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
  <li>Path planning for wagon-pushing robots uses the above-mentioned basic path and is executed as follows:
    <ol>
      <li>The start and end points are established.</li>
      <li>The path for each robot along the basic path is planned.</li>
      <li>According to each point on the path estimated in step 2, the position of the wagon control point is determined so that it fits the fixed relationship with the robot position.</li>
      <li>If the wagon control point is not on the basic path (Fig. 20a), posture (Rθ) of the robot is changed so that the wagon control point passes along the basic path.</li>
      <li>If the head of the wagon is not on the basic path (Fig. 20b), the relative posture (Wθ) of the wagon is modified so that it passes along the basic path.</li>
      <li>Steps 3 through 5 are repeated until the end point is reached
<img src="./images2/fig20.png" alt="opt" width="50%" /></li>
    </ol>
  </li>
</ul>
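<p>The loop of steps 3 through 5 can be illustrated with the small helper below, which covers step 4 only: given the basic path and the fixed offset of the control point in front of the robot, it returns the robot heading Rθ that puts the control point back on the path. It is a simplified, self-contained sketch, not the full planner.</p>

<pre><code>import math

def steer_to_basic_path(rx, ry, path, offset):
    """Heading Rθ that places the wagon control point (held `offset` metres
    in front of the robot) on the nearest suitable point of the basic path.

    path : list of (x, y) waypoints of the basic (Voronoi) path.
    """
    best_theta, best_err = 0.0, float('inf')
    for px, py in path:
        err = abs(math.hypot(px - rx, py - ry) - offset)   # how far from `offset` away
        if err &lt; best_err:
            best_err = err
            best_theta = math.atan2(py - ry, px - rx)      # face that path point
    return best_theta
</code></pre>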


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-4">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
  <li>Fig. 21 shows the results of wagon path planning, using example start and end points.
<img src="./images2/fig21.png" alt="opt" width="70%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="movement-path-planning---path-planning-for-wagons-5">5.3. Movement path planning - Path planning for wagons</h1>
<ul>
  <li>Using this procedure we can simplify the space search without sacrificing the safety of the basic path diagram.</li>
  <li>The actual time required to calculate the path of a single robot was 1.10 ms.</li>
  <li>The time including the wagon path planning was 6.41 ms.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="integration-of-planning">5.4. Integration of planning</h1>
<ul>
  <li>We perform operation planning for overall item-carrying action, which integrates position, path and arm motion planning.
    <ol>
      <li>Perform wagon grip position planning in order for the robot to grasp a wagon loaded with goods.</li>
      <li>Perform position planning for goods delivery. The results of these work position planning tasks become the candidate movement target positions for the path planning of the robot and the wagon.</li>
      <li>Perform an action planning that combines the above-mentioned planning tasks, from the initial position of the robot to the path the robot takes until grasping the wagon, and the path the wagon takes until the robot reaches the position at which the robot can deliver the goods.</li>
    </ol>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="integration-of-planning-1">5.4. Integration of planning</h1>
<ul>
  <li>For example, if there are four candidate positions for wagon gripping and four candidate positions for goods delivery around the target person, then we can plan 16 different action sequences, as shown in Fig. 22. The various action sequences obtained from this procedure are then evaluated to choose the optimum sequence.
<img src="./images2/fig22.png" alt="opt" width="70%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="evaluation-of-efficiency-and-safety">5.5. Evaluation of efficiency and safety</h1>
<ul>
  <li>We evaluate each candidate action sequence based on efficiency and safety, as shown in Eq. (8).</li>
  <li>α, β, and γ are the weight values of Length, Rotation, and ViewRatio, respectively.</li>
  <li>Length and Rotation represent the total distance traveled and the total rotation angle.</li>
  <li>Len_min and Rot_min represent the minimum values over all candidate actions.</li>
  <li>The first and second terms of Eq. (8) are metrics for the efficiency of the action.</li>
  <li>ViewRatio is the ratio of the number of motion planning points inside the person's visual field to the total number of motion planning points.
<img src="./images2/eq8.png" alt="opt" width="100%" /></li>
</ul>
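<p>Eq. (8) is given only as an image; based on the definitions above, the sketch below uses an assumed normalized weighted-sum form (each efficiency term is at most 1, reached by the candidate with the minimum length or rotation) to show how the candidates could be scored. The exact combination is defined by Eq. (8) in the paper.</p>

<pre><code>def evaluate_action(length, rotation, view_ratio, len_min, rot_min,
                    alpha=1.0, beta=1.0, gamma=1.0):
    """Score of one candidate action sequence (larger is better).

    length, rotation : total travel distance and total rotation of this candidate
    len_min, rot_min : minima over all candidate sequences
    view_ratio       : fraction of planned points inside the person's visual field
    alpha, beta, gamma : weights for Length, Rotation, and ViewRatio
    """
    efficiency = alpha * (len_min / length) + beta * (rot_min / rotation)
    safety = gamma * view_ratio
    return efficiency + safety

# the sequence with the highest score among the 16 combinations is selected
</code></pre>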


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiments">6. Experiments</h1>
<ul>
  <li>We present the results of fundamental experiments described below using an actual robot and the proposed ROS–TMS.
    <ol>
      <li>Experiment to detect changes in the environment</li>
      <li>Experiment to examine gripping and delivery of goods</li>
      <li>Simulation of robot motion planning</li>
      <li>Service experiments</li>
      <li>Verification of modularity and scalability</li>
    </ol>
  </li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment">6.1. Experiment to detect changes in the environment</h1>
<ul>
  <li>We conducted experiments to detect changes using ODS (Section  4.3) with various pieces of furniture.</li>
  <li>We consider six pieces of target furniture, including two tables, two shelves, one chair, and one bed.</li>
  <li>For each piece of furniture, we prepared 10 sets of previously stored and newly acquired data containing various kinds of goods, including books, snacks, and cups, and performed the change detection separately for each set.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment-1">6.1. Experiment to detect changes in the environment</h1>
<ul>
  <li>As the evaluation method, we considered the ratio of change detection with respect to the number of objects that were changed (change detection ratio).</li>
  <li>We also considered over-detection, which occurs when the system detects a change that has actually not occurred.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment-2">6.1. Experiment to detect changes in the environment</h1>
<ul>
  <li>The change detection ratios for each furniture type are as follows: 93.3% for tables, 93.4% for shelves, 84.6% for chairs, and 91.3% for beds.
<img src="./images2/table3.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-detect-changes-in-the-environment-3">6.1. Experiment to detect changes in the environment</h1>
<ul>
  <li>The sections enclosed by circles in each image represent points that actually underwent changes.
<img src="./images2/fig23.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-examine-gripping-and-delivery-of-goods">6.2. Experiment to examine gripping and delivery of goods</h1>
<ul>
  <li>We performed an operation experiment in which a robot grasps an object located on a wagon and delivers the object to a person.</li>
  <li>As a prerequisite for this service, the goods are assumed to have been placed on the wagon, and their positions are known in advance.</li>
  <li>After performing the experiment 10 times, the robot successfully grabbed and delivered the object in all cases.
<img src="./images2/fig24.png" alt="opt" width="100%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-examine-gripping-and-delivery-of-goods-1">6.2. Experiment to examine gripping and delivery of goods</h1>
<ul>
  <li>We measured the displacement of the position of the goods (Ox or Oy in Fig. 25) and the linear distance (d) between the measured value and the true value at the time of delivery, to verify the effect of rotation errors or arm posture errors.</li>
</ul>

<p><img src="./images2/fig25.png" alt="opt" width="50%" />
<img src="./images2/table4.png" alt="right" width="90%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="experiment-to-examine-gripping-and-delivery-of-goods-2">6.2. Experiment to examine gripping and delivery of goods</h1>
<ul>
  <li>The distance error of the position of the goods at the time of delivery was 35.8 mm.</li>
  <li>According to the manipulability degree, it is possible to cope with these errors, because the system plans a delivery posture with some extra margin in which persons and robots can move their hands.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning">6.3. Simulation of robot motion planning</h1>
<ul>
  <li>We set up one initial position for the robot (Rx, Ry, Rθ) = (1000 mm, 1000 mm, 0°), the wagon (Wx, Wy, Wθ) = (3000 mm, 1000 mm, 0°), and the target person (Hx, Hy, Hθ) = (1400 mm, 2500 mm, -90°), and assume the person is in a sitting state.</li>
  <li>The range of vision of this person is shown by the red area in Fig. 26b.
<img src="./images2/fig26.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-1">6.3. Simulation of robot motion planning</h1>
<ul>
  <li>The action planning result that passes over wagon grip candidate 1
<img src="./images2/fig27.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-2">6.3. Simulation of robot motion planning</h1>
<ul>
  <li>The action planning result that passes over wagon grip candidate 2
<img src="./images2/fig28.png" alt="opt" width="90%" /></li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-3">6.3. Simulation of robot motion planning</h1>
<ul>
  <li>Furthermore, the evaluation values obtained by changing the weight of each evaluation term for each planning result are listed in Table 5, Table 6, and Table 7.</li>
</ul>

<p><img src="./images2/table5.png" alt="right" width="50%" />
<img src="./images2/table6.png" alt="right" width="50%" />
<img src="./images2/table7.png" alt="right" width="70%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="simulation-of-robot-motion-planning-4">6.3. Simulation of robot motion planning</h1>
<ul>
  <li>The actions of Plan 2–3 were the most highly evaluated (Table 5).</li>
  <li>Fig. 28a and d indicate that all of the actions occur within the field of vision of the person.</li>
  <li>Since the target person can monitor the robot’s actions at all times, the risk of the robot unexpectedly touching a person is lower, and if the robot misses an action, the situation can be dealt with immediately.</li>
  <li>The action plan chosen from the above results according to the proposed evaluation values exhibits both efficiency and high safety.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="service-experiments">6.4. Service experiments</h1>
<p>We performed a service experiment for the carriage of goods, in accordance with the combined results of these planning sequences. The state of the sequence of actions is shown in Fig. 29.
<img src="./images2/fig29.png" alt="right" width="100%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="service-experiments-1">6.4. Service experiments</h1>
<ul>
  <li>This service was carried out successfully, avoiding any contact with the environment.</li>
  <li>The total time for the task execution was 312 s when the maximum velocity of SmartPal-V was limited to 10 mm/s for safety.</li>
  <li>The robot position was confirmed to always be within the range of vision of the subject during execution.</li>
  <li>Accordingly, we can say that the planned actions had an appropriate level of safety.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="service-experiments-2">6.4. Service experiments</h1>
<ul>
  <li>There was a margin for the movement of hands, as shown in Fig. 29f, for which the delivery process could appropriately cope with the movement errors of the robot.</li>
  <li>In reality, the maximum error from the desired trajectory was about 0.092 m in the experiments.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="verification-of-modularity-and-scalability">6.5. Verification of modularity and scalability</h1>
<ul>
  <li>We built the ROS–TMS for three types of rooms to verify its high modularity and scalability.</li>
  <li>Thanks to high flexibility and scalability of the ROS–TMS, we could set up these various environments in a comparatively short time.</li>
</ul>

<p><img src="./images2/fig30.png" alt="right" width="100%" />
<img src="./images2/fig31.png" alt="right" width="100%" /></p>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="conclusions">7. Conclusions</h1>
<ul>
  <li>In the present paper, we have introduced a service robot system with an informationally structured environment named ROS–TMS that is designed to support daily activities of elderly individuals.</li>
  <li>The room considered herein contains several sensors to monitor the environment and a person.</li>
  <li>The person is assisted by a humanoid robot that uses information about the environment to support various activities.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="conclusions-1">7. Conclusions</h1>
<ul>
  <li>In the present study, we concentrated on detection and fetch-and-give tasks, which we believe will be among the most commonly requested tasks by the elderly in their daily lives.</li>
  <li>We have presented the various subsystems that are necessary for completing this task and have conducted several independent short-term experiments to demonstrate the suitability of these subsystems, such as a detection task using a sensing system and a fetch-and-give task using a robot motion planning system of the ROS–TMS.</li>
</ul>


</div>
<div class='slide '>
<!-- _S9SLIDE_ -->
<h1 id="conclusions-2">7. Conclusions</h1>
<ul>
  <li>Currently, we adopt a deterministic approach for choosing proper data from redundant sensory information, based on reliability values that are pre-defined manually.</li>
  <li>Our future work will include the extension to a probabilistic approach for fusing redundant sensory information.</li>
  <li>Also, we intend to design and prepare a long-term experiment in which we can test the complete system for a longer period of time.</li>
</ul>
<!-- === end markdown block === -->
</div>


</div><!-- presentation -->
</body>
</html>