changeset 10:3bee23948f70

nozomi-finish
author Nozomi Teruya <e125769@ie.u-ryukyu.ac.jp>
date Fri, 03 Jun 2016 10:06:35 +0900
parents 8b5af40f3a04
children 5fea5aaff066
files presen_text.md s6/themes/projection.css slide.html slide.md
diffstat 4 files changed, 1154 insertions(+), 17 deletions(-) [+]
line wrap: on
line diff
--- a/presen_text.md	Fri Jun 03 05:16:27 2016 +0900
+++ b/presen_text.md	Fri Jun 03 10:06:35 2016 +0900
@@ -55,7 +55,7 @@
 
 - Several service robots that act as companions to elderly people or as assistants to humans who require special care have been developed [13–18].
 いくつかのサービスロボットは高齢者への付き添いや、特別なケアを必要としている人のAssistantとして開発されている
- 
+
 
 # p7
 - We also have been developing an informationally structured environment for assisting in the daily life of elderly people in our research project, i.e., the Robot Town Project [2,19].
@@ -67,7 +67,7 @@
 
 #p8
 - Events sensed by an embedded sensor system are recorded in the Town Management System (TMS),
-埋め込まれたセンサーシステムによって感知されたイベントは TMS(Town management System) に記録され、 
+埋め込まれたセンサーシステムによって感知されたイベントは TMS(Town management System) に記録され、
 
 - appropriate information about the surroundings and instructions for proper services are provided to each robot [2].
 周囲の適切な情報と適切なサービスの指示は各ロボットに提供される。
@@ -81,7 +81,7 @@
 オブジェクトは埋め込まれたセンサまたはRFIDタグによって検出され、すべてのデータはTMSデータベースに格納される
 
 - A service robot performs various service tasks according to the environmental data stored in the TMS database in collaboration with distributed sensors and actuators, for example, installed in a refrigerator to open a door.
-サービスロボットは設置された冷蔵のドアを開けるといった、様々なサービスのタスクを 分散型センサーとアクチュエータの共同の TMS データベースに格納されている環境データに応じて実行する。 
+サービスロボットは、例えば冷蔵庫に設置されたドアを開けるといった様々なサービスタスクを、分散型センサーやアクチュエータと連携し、TMSデータベースに格納された環境データに応じて実行する。
 
 
 #p10
@@ -114,14 +114,14 @@
 ROS、TMSの判断システムを使ったオブジェクト検出
 
 • Fetch-and-give task using the motion planning system of the ROS–TMS.
-ROS、TMS の動作計画システムを使ったFetch and give task 
+ROS、TMS の動作計画システムを使ったFetch and give task
 
 
 #p13
 The remainder of the present paper is organized as follows.
 本論文は以下のように構成されている
 
-After presenting related research in Section 2, 
+After presenting related research in Section 2,
 第二章で関連研究を提示し
 
 we introduce the architecture and components of the ROS–TMS in Section 3.  
@@ -347,7 +347,7 @@
 The ROS-TMS has unique features such as high scalability and flexibility, as described below.
 ROS-TMS は以下に説明するようなユニークな機能を備えている
 
-・Modularity: The ROS–TMS consists of 73 packages categorized into 11 groups and 151 processing nodes. 
+・Modularity: The ROS–TMS consists of 73 packages categorized into 11 groups and 151 processing nodes.
 Modularity: ROS-TMS は 11グループと151の処理ノードに分類された73個のパッケージで構成される
 
 Re-configuration of structures, for instance adding or removing modules such as sensors, actuators, and robots, is simple and straightforward owing to the high flexibility of the ROS architecture.
@@ -474,7 +474,7 @@
 2. Alignment of the furniture model
 家具モデルのアライメント
 
-3. Object extraction by furniture removal 
+3. Object extraction by furniture removal
 家具の除去によるオブジェクト抽出
 
 4. Segmentation of objects
@@ -495,7 +495,7 @@
 その後、システムは更新された家具の位置モデルを作成するために、この結果と 家具の位置情報を重ねる
 
 # p44
-The point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture’s point cloud model (Fig. 9b). 
+The point cloud (Fig. 9a) acquired from the robot is superimposed with the furniture’s point cloud model (Fig. 9b).
 ロボットが取得した点群(ポイントクラウド)(Fig. 9a) に 家具の点群モデル(Fig. 9b)を重ねあわせる
 
 After merging the point cloud data (Fig. 9a) and the point cloud model (Fig. 9b), as shown in Fig. 9c, the system deletes all other points except for the point cloud model for the furniture and limits the processing range from the upcoming steps.
@@ -625,3 +625,308 @@
 
 The distances shown in bold represent the values for which it is regarded as the same object.
 太字で示される距離はそれが同じオブジェクトとみなされた値を表している
+
+
+-----------以下Dpop-----------
+5章:計画の内容
+# p56
+Robot motion planning (TMS_RP)は、TMS_SSから取得した情報に基づいて、移動・受け渡し・障害物回避のためのロボットの移動経路とロボットアームの軌道を計算するROS–TMSのコンポーネントである。
+
+日常生活において高齢者に最も必要とされるタスクの1つであるfetch-and-giveタスクのサービスを実装する計画が必要だと考える。
+
+# p57
+また、Robot motion planningには図14aに示すように、ティータイムのときや高齢者介護施設で居住者にタオルを配るときのように、大量のオブジェクトを運搬・提供するためのワゴンを使うサービスが含まれている。
+
+# p58
+Robot motion planningは以下で説明するfetch-and-giveタスク実装のためのいくつかのサブ計画、統合計画、プランニングの評価で構成される。
+
+1.ワゴンの把握計画
+2.物品配達のための位置計画
+3.移動経路計画
+4.ワゴンの経路計画
+5.統合計画
+6.安全性と効率性の評価
+
+各プロセスは、TMS_DB と TMS_SSから得られた環境データを使用している。
+
+#58
+5.1.ワゴンの把持計画
+ロボットがワゴンを押すためには、まずロボットはワゴンを把持しなければならない。
+
+ワゴンの両側面に取り付けられた2本のポールを把持すれば、ロボットは安定してワゴンを押すことができる。
+
+したがって、図14に示すように、ワゴンに対するロボットの基準位置の候補は4つに限定される。
+
+#59
+ワゴンの位置と向き、大きさは、ROS-TMSのデータベースを使用して管理されるため、この情報を使用して、正しい相対位置を決定することが可能である。
+
+ロボットがワゴンの長い側面を把持するときのワゴンの方向に基づいて、有効な候補点を式を用いて決定することができる。
+
+#60
+式2~4はi=0,1,2,3の値をとる。ここではロボットをR、ワゴンをWで表している。
+添字x,yおよびθは、x座標、y座標そして姿勢(z軸の回転)を表している。
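Eqs. (2) through (4) appear only as an image, but the idea (four candidate base poses, one per side of the wagon, each facing its centre) can be sketched as follows. The wagon size, the clearance, and the face-the-centre convention are illustrative assumptions, not the paper's exact equations.

```python
import math

def grasp_candidates(wx, wy, wtheta, depth, width, clearance=0.3):
    """Return four candidate robot base poses (Rx, Ry, Rtheta) around a wagon.

    (wx, wy, wtheta): wagon centre pose; depth/width: wagon footprint.
    Hypothetical reconstruction of Eqs. (2)-(4): for i = 0..3 the robot
    stands off the i-th side of the wagon and faces its centre.
    """
    poses = []
    for i in range(4):
        side = wtheta + i * math.pi / 2.0            # outward direction of the i-th side
        half = (depth if i % 2 == 0 else width) / 2.0
        rx = wx + (half + clearance) * math.cos(side)
        ry = wy + (half + clearance) * math.sin(side)
        rtheta = side + math.pi                       # face back toward the wagon centre
        poses.append((rx, ry, rtheta))
    return poses
```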
+
+#61
+図13はi=2が与えられた時のロボットとワゴンの位置関係を示している。
+
+#62
+人にものを受け渡すためには、人の位置によってロボットの基本位置と配達されるべき物の位置の両方を計画する必要がある。
+
+指標として可操作度(manipulability)を使用し、システムは基準位置に対する物品の位置を計画する。
+
+可操作度は、各関節の角度を変化させたときに手・指をどの程度動かせるかで表される。
+
+#63
+高い可操作度を持った姿勢で物品を受け渡せば、ロボットと人の間に位置の誤差が存在する場合でも、動きを修正するのは容易である。
+
+我々は、人の腕の可操作度が高いほうが物を掴みやすいと考える。
+これらの関係は式5、6で表され、速度ベクトルvは手に対応し、qは関節角度ベクトルに対応する。
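Eqs. (5) and (6) are shown only as an image; they presumably relate hand velocity to joint velocities through the Jacobian, with manipulability measured as w = sqrt(det(J Jᵀ)) (Yoshikawa's measure). A minimal sketch for a planar 2-link arm, with hypothetical link lengths:

```python
import math

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Analytic Jacobian of a planar 2-link arm (hand position vs. joint angles)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); for a square J this is |det J|."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det)
```

For this arm w = l1·l2·|sin(q2)|, so a fully stretched arm (q2 = 0) is singular and postures near q2 = 90° are easiest to correct, which matches the text's argument for delivering goods in high-manipulability postures.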
+
+#64
+もし腕が冗長自由度を持つなら、1つの手先位置に無数の関節角度ベクトルが対応する。
+
+したがって、この問題を解くときは、関節の可動範囲内で最も高い可操作度を示す姿勢を計算する。
+
+#65
+可操作度を用いた物品位置とロボット基準位置の計画手順は次のとおりである。
+1.システムはロボットと各人物に対応する可操作度をローカル座標系上にマップする。
+2.両方の可操作度マップが統合され、物品の位置が決定される。
+3.物品の位置にもとづいて、ロボットの基準位置が決定される。
+正面方向をx軸、横方向をy軸として、ロボットを座標系の原点にセットする。
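Steps 1 and 2 above can be sketched as follows. Sampling manipulability over a local XY grid, and combining the robot's and the person's maps by summation, are assumptions about what "mapped" and "integrated" mean here, not the paper's method.

```python
def build_map(manip, xs, ys):
    """Step 1: sample a manipulability function manip(x, y) over a local XY grid."""
    return {(x, y): manip(x, y) for x in xs for y in ys}

def goods_position(robot_map, human_map):
    """Step 2: integrate both maps (here: summed manipulability, a guess)
    and pick the grid cell where the combined value is largest."""
    common = set(robot_map) & set(human_map)
    return max(common, key=lambda c: robot_map[c] + human_map[c])
```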
+
+#66
+図15aに示すように、手でオブジェクトを持った状況を想定して、XY平面上の各位置で可操作度がマップされる。
+図15bに示すように、このマッピングは高さ方向であるz軸に沿って重ね合わされる。
+
+
+#67
+次のステップは、可操作度マップを用いて配達する物品の位置を決定することである。
+図16aに示すように、各高さにおける可操作度の最大値を取り、各ローカル座標系のXY座標を保持する。
+これらの座標は基準位置と手先位置の位置関係を表している。
+図16bに示すように、ロボットと人の可操作度マップを重ね合わせ、人の座標にも同じ処理を適用する。
+そうすることで、例えば対象者が立っていても座っていても、顔の位置を基準に合わせることで可操作度マップ上のz軸の値を補正できる。
+その結果、物品を配達するときに使用される絶対座標系における高さzは、可操作度の最大値の合計が最も高くなる高さに相当する。
+
+#68
+人の可操作度マップ上で計算された高さに従って、システムは前もって保持した手の相対座標を用いて、配達する物品の絶対座標を求める。
+
+物品を受け取る人の位置はTMS_SSとTMS_DBによって管理されており、この位置を基準点として相対座標を当てはめることで物品の位置を求めることもできる。
+
+上記の手順に従えば、配達しようとしている物品の一意な位置を決定できる。
+
+#69
+最後のステップとして、先に計算された位置へ物品を差し出すために、ロボットの基準位置が決定される。
+特定のオブジェクトの高さに対応する可操作度マップに従って、システムは基準位置と手の間の関係を取得する。
+オブジェクトの位置を基準点として、基準位置がこの関係の条件を満たしていれば、ロボットは決定されたどの位置にもオブジェクトを差し出すことができる。
+
+#70
+このため、配達時には、オブジェクトの位置を中心とする外周上の点が、基準位置の絶対座標系における候補点として決定される。
+外周上の全ての点を候補とすると、複数の候補点を抽出する以降の行動計画は冗長になる。
+最良の方法は、外周をn個のセクターに分割し、分割後の各セクターから代表点を取得して、候補点の数を限定することである。
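Splitting the circumference into n sectors and keeping one representative point per sector can be sketched like this. Taking the representative at the sector's centre angle, the hand-to-base radius, and the face-the-object convention are assumptions for illustration.

```python
import math

def delivery_base_candidates(ox, oy, radius, n=8):
    """Split the circle of base positions around the object (ox, oy) into n
    sectors and return one representative pose (bx, by, heading) per sector.

    'radius' is the hand-to-base distance read off the manipulability map
    (assumed known here); the representative is placed at the centre angle
    of each sector and faces the object.
    """
    points = []
    for i in range(n):
        ang = (i + 0.5) * 2.0 * math.pi / n    # centre angle of the i-th sector
        bx = ox + radius * math.cos(ang)
        by = oy + radius * math.sin(ang)
        points.append((bx, by, ang + math.pi))  # heading: toward the object
    return points
```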
+
+#71
+その後、得られた代表点は式7のように評価される。この評価は安全性に重点を置いている。
+ここで、Viewはロボットが対象者の視野に入るかどうかを表すブール値である。視野内にある場合はViewは1、それ以外の場合は0である。
+ロボットが対象者の視野に入っていれば、予想外の接触による危険が低減されるため、この計算が必要となる。
+式において、Dhumanは対象者までの距離を表し、Dobsは最も近い障害物までの距離を表す。
+
+#72
+対象者や障害物との接触の危険を低減するために、対象者や障害物からの距離が大きい位置ほど高く評価される。
+ある外周セクター上の全ての候補点で障害物との接触が生じる場合、そのセクターの代表点は選択されない。
+以上の処理に従って、要求された物品の位置に基づいてロボットの基準位置が計画される。
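Eq. (7) itself is shown only as an image; below is a hedged sketch of the kind of score it describes, rewarding being inside the person's field of view and keeping distance from the person and the nearest obstacle. The linear form and the weights are assumptions, not the paper's equation.

```python
def evaluate_candidate(in_view, d_human, d_obs, w_view=1.0, w_h=0.5, w_o=0.5):
    """Score one candidate base position (larger is better).

    in_view: the View boolean; d_human: distance to the target person;
    d_obs: distance to the nearest obstacle.  Weights are hypothetical.
    """
    view = 1.0 if in_view else 0.0
    return w_view * view + w_h * d_human + w_o * d_obs

def best_candidate(candidates):
    """candidates: list of (in_view, d_human, d_obs); return index of the best."""
    scores = [evaluate_candidate(*c) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)
```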
+
+
+# n
+5.3 移動経路の計画 - ロボットの経路計画
+
+一般的な生活環境で働くロボットには、人との接触可能性を下げた、安全性の高い経路計画が必要である。
+
+しかし、ロボットとワゴンの位置(x,y)と姿勢(θ)からなる最大6次元のパラメータ空間で、最も安全性の高い経路を計画するには時間がかかる。
+
+# n
+つまり、安全性の高い軌道を生成する方法が必要だが、同時に処理時間も短縮しなければならない。
+そのために、図18に示すボロノイマップを利用する。
+
+# n
+5.3 移動経路の計画 - ワゴンのための経路計画
+リアルタイムにワゴンの経路計画ができるよう、経路探索空間の次元を減少させる必要がある。
+
+ワゴンを押すロボットの状態を一意に記述するパラメータは最大6次元を持ちうるが、ロボットがワゴンを操作可能な範囲は実際にはもっと制限されている。
+
+# n
+そこで、図19に示すような制御点を設定する。これによりロボットと制御点の相対的な位置関係が固定される。
+
+# n
+ロボットの動作は、ロボットに対するワゴンの相対方向(Wθ)を単位として変化することが想定される。
+
+さらに、相対位置の範囲も制限されるため、ワゴンを押すロボットは4次元のみで表現でき、ワゴンの経路計画のための探索時間が短縮される。
+
+# n
+経路計画は上記の基本的な経路を使用して以下のように実行される。
+1. 開始位置と終了位置が確立される
+2. 基本的な経路に沿って各ロボットの経路が計画される
+3. step2で推定した経路上の各点について、ロボットの位置との関係に適合するようにワゴンの制御点の位置が決定される。
+
+# n
+4. ワゴンの制御点が基本経路上にない場合(図20a)、制御点が基本経路に沿って通過するようにロボットの姿勢(Rθ)が変更される。
+5. ワゴンのヘッドが基本経路上にない場合(図20b)、基本経路に沿って通過するようにワゴンの相対姿勢(Wθ)が変更される。
+6. 終点に到達するまでステップ3~5を繰り返す
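Steps 3 through 5 can be illustrated with a toy model in which the control point sits a fixed distance ahead of the robot and the wagon head a fixed distance ahead of the control point; all the geometry and distances here are hypothetical, not the paper's planner.

```python
import math

def follow_basic_path(path, d_rc=0.4, d_cw=0.3):
    """Walk a wagon-pushing robot along a basic path (list of (x, y) waypoints).

    At each waypoint the robot posture Rtheta is set so the control point
    heads for the next waypoint (step 4), and the wagon's relative posture
    Wtheta is corrected so the wagon head also follows the path (step 5).
    Returns a list of (x, y, Rtheta, Wtheta) states.
    """
    states = []
    for p, nxt in zip(path, path[1:]):
        rtheta = math.atan2(nxt[1] - p[1], nxt[0] - p[0])   # aim the control point
        cx = p[0] + d_rc * math.cos(rtheta)                  # control point position
        cy = p[1] + d_rc * math.sin(rtheta)
        # relative wagon posture so the wagon head points at the next waypoint
        wtheta = math.atan2(nxt[1] - cy, nxt[0] - cx) - rtheta
        states.append((p[0], p[1], rtheta, wtheta))
    return states
```

On a straight basic path both correction angles stay at zero, and curvature in the path shows up as small Rtheta/Wtheta adjustments, mirroring the repeated steps 3 through 5.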
+
+# n
+図21は始点と終点の例を使用した経路計画の結果である。
+
+始点を (Rx,Ry,Rθ)=(2380mm,1000mm, 0°)  、終点を(Rx,Ry,Rθ)=(450mm,2300mm, -6°) として
+ワゴンを掴む位置計画と物を運ぶ位置計画の結果を使用する。
+
+(緑の四角で示す)ワゴンの移動軌跡が、(灰色の丸い形で示す)ロボットの移動軌跡内にあることが確認できる。
+
+# n
+この手順を使用して、基本経路の安全性を損なうことなく探索空間を簡素化できる。
+実際、1つのロボットの経路を計算するのにかかった時間は1.10msで、ワゴンの経路計画も含めると6.41msであった。
+
+# n
+5.4. 統合計画
+位置計画、経路計画、腕の動作計画を統合し、物品運搬動作全体のための動作計画を実行する。
+はじめに、物品を載せたワゴンを掴むための位置計画を実行する。
+次に、物品配達のための位置計画を行う。これらの位置計画タスクの結果は、ロボットとワゴンの経路計画における移動目標位置の候補となる。
+最後に、ロボットが初期位置からワゴンを掴むまでのロボットの経路と、物品を届けられる位置に到達するまでのワゴンの経路とを組み合わせた行動計画を行う。
+
+# n
+例えば、対象者のまわりに商品を配達するための4つの候補点とワゴンを掴むための4つの候補点がある場合、図22に示すように16の異なる動作を計画できる。
+この手順で得られたさまざまな一連の動作はその後最適な動作を選択するために評価をうける。
+
+# n
+5.5. 効率性と安全性の評価
+式8に示すように、効率性と安全性にもとづいて各動作列候補の評価を行う。
+α、β、γはそれぞれLength、Rotation、ViewRatioの重み値である。
+LengthとRotationは総移動距離と総回転角度を表す。
+LenminとRotminは全ての候補動作における最小値を表す。
+式8の第1項と第2項は動作効率のためのメトリクスである。
+ViewRatioは動作計画ポイントの総数のうち、人の視野に入る動作計画ポイントの数の割合である。
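Eq. (8) is shown only as an image; a plausible normalized form consistent with the description (the first two terms normalise Length and Rotation by the minima over all candidates, the third term is ViewRatio) is sketched below. The form and the weight values are assumptions, not the paper's.

```python
def evaluate_plan(length, rotation, view_ratio, len_min, rot_min,
                  alpha=0.4, beta=0.2, gamma=0.4):
    """Hypothetical form of Eq. (8): higher is better.

    Efficiency terms: len_min/length and rot_min/rotation (both 1.0 for the
    most efficient candidate).  Safety term: view_ratio, the fraction of
    planned motion points inside the person's field of view.  Assumes the
    candidate totals are positive.
    """
    return (alpha * len_min / length
            + beta * rot_min / rotation
            + gamma * view_ratio)

def rank_plans(plans):
    """plans: list of (length, rotation, view_ratio); return index of the best."""
    len_min = min(p[0] for p in plans)
    rot_min = min(p[1] for p in plans)
    scores = [evaluate_plan(l, r, v, len_min, rot_min) for l, r, v in plans]
    return max(range(len(scores)), key=scores.__getitem__)
```

With these (assumed) weights, a slightly longer plan that stays in the person's view can outrank the shortest plan, which matches the discussion of plans 2-1 and 2-3 in Section 6.3.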
+
+# n
+6章では、ROS–TMSと実際のロボットを用いて以下のような基本的な実験の結果を述べる。
+1.環境変化の検出実験
+2.物を掴んで運ぶ実験
+3.ロボット移動計画のシミュレーション
+4.サービス実験
+5.拡張性とモジュール性の確認
+
+# n
+6.1. 環境変化の検出実験
+4.3節で説明したODSを用いて、様々な家具で変化を検出する実験を行った。
+2つのテーブル、2つの棚、1つの椅子、1つのベッドを含む対象家具6品で実験した。
+家具ごとに、事前に保存したデータと、本、スナック、カップなど新しく取得した物品のデータを10セット用意し、各セットで別々に変化検出を行った。
+
+# n
+評価方法として、変化したオブジェクト数に対する変化検出の比を考えた(変化検出率)。
+また、我々はシステムが実際に発生していない変化を検出した場合を過検出とみなした。
+この実験結果を表3に示す。
+それぞれの家具タイプごとの変化検出比は、テーブルは93.3% 、棚は93.4%、椅子は84.6%、ベッドは91.3% であった。
+
+# n
+また、各家具の品ごとの変化検出の例を図23に示す。
+各画像で円で囲まれたセクションは実際に変化が生じた箇所であり、正しく検出できていることがわかる。
+
+# n
+6.2 物を掴んで運ぶ実験
+ロボットがワゴン上に置かれたオブジェクトを掴んで人に渡す動作実験を行った。
+
+このサービスのための前提条件として、物品はワゴンの上に配置されていることが仮定され、その位置があらかじめ知られているものとする。
+
+実験を10回行い、ロボットは全ての場合でオブジェクトを掴んで運ぶことに成功した。動作の様子を図24に示す。
+
+# n
+また、物品位置のズレ(図25のOx、Oy)を測定し、腕の姿勢誤差や回転誤差の影響を検証するために、測定した値と配達時の値の間の直線距離(d)を求めた。結果を表4に示す。
+
+# n
+配達時の物品の位置の距離誤差は35.8mmであった。
+システムは可操作度に基づき、ロボットと人が手を動かせる余裕をもった配達姿勢を計画するため、これらの誤差に対処することが可能である。
+
+# n
+6.3. ロボット移動計画のシミュレーション
+ロボットの初期位置を(Rx,Ry,Rθ)=(1000mm,1000mm, 0°)、ワゴンの初期位置を(Wx,Wy,Wθ)=(3000mm,1000mm, 0°)、対象者の位置を(Hx,Hy,Hθ)=(1400mm,2500mm, -90°)に設定し、図26aに示すように対象者は座った状態と仮定した。
+また、このときの人の視野範囲は図26bの赤いエリアで示している。
+
+# n
+図27に、候補1としてワゴンを掴んで移動する行動計画の結果を示す。
+aはワゴンを掴む位置で、bはワゴンを掴みに行く移動経路である。
+c,d,eは物品を対象者のところに運ぶ行動計画の候補である。
+dでは対象者の後ろをまわる移動経路が示されている。
+
+# n
+同様に、候補2としてワゴンの逆の辺を掴み移動する行動計画結果を図28に示す。
+ここのc,d,eの候補では全て対象者の前から通る移動経路が示されている。
+
+# n
+さらに、各計画結果について評価の重みを変更した評価値を表5,6,7に記載する。
+Plan No.は先ほどあげた6つの移動経路である。
+計画2-3の行動は最も高く評価された(表5)。次いで2-1が高い。
+対応する計画である図28のaとdでは、ロボットの全ての行動が人の視野に入っている。
+
+# n
+つまり、対象者からロボットの行動を常に監視できるので、予想外の形でロボットと接触する危険性が低く、ロボットの行動を見逃してもすぐに対処できる状況と言える。
+
+したがって、表の結果から、選択された行動計画は高い安全性と効率性の両方を示すと言える。
+
+# n
+6.4. Service experiments
+我々は、これらの一連の計画を組み合わせた結果に従い、物品運搬のためのサービス実験を行った。
+一連の動作の状態を図29に示す。
+1.初期位置からワゴンを掴む位置への移動計画が実行される(図29a)。
+2.ロボットは初期位置からワゴンを掴む位置に移動する。ロボットの位置はRGB-Dカメラによって補正される。
+3.ワゴンを掴むための腕の軌道が計画・実行される(図29b)。
+4.ロボットがワゴンを押しながら、物品を運ぶ位置に移動する(図29c)。
+5.ワゴンから手を離し、ワゴンの上にあるオブジェクトを掴むための腕の軌道が計画・実行される(図29d)。
+6.オブジェクトを配達位置に差し出すための腕の軌道が計画・実行される(図29e、f)。
+
+# n
+このサービスは環境との接触を避けて最後まで実行することに成功した。
+安全のためにSmartPal-Vの最大速度を10mm/secに制限した場合、タスク実行のための合計時間は312秒であった。
+また、ロボットの動作は常に被験者の視野の中で実行されていた。したがって、計画された行動は適切な安全性のレベルにあったと言える。
+
+# n
+また、図29fで示すように手の動きにも余裕があり、運搬処理はロボットの移動誤差に適切に対処することができた。
+実際には、この実験での目標軌道からの最大誤差は0.092mmであった。
+
+# n
+6.5. Verification of modularity and scalability
+
+高いモジュール性と拡張性を確認するために、3種類の部屋のためのROS-TMSを構築した。
+部屋の構造と大きさ、センサー設定、センサーの種類とロボットは部屋によってすべて異なる。
+部屋の実際の写真とシミュレーションモデルをそれぞれ図30,31に示す。
+
+部屋A(図30a/31a、4m×4m)では、18のパッケージと32の処理ノードを用い、1つのLRF、3つのRFIDタグリーダー、1つのXtionセンサーとヒューマノイドロボットを使用した。
+
+部屋B(図30b/31b、4.5m×8m)は、52のパッケージと93の処理ノードを用い、2つのLRF、2つのRFIDタグリーダー、4つのXtionセンサー、10個のViconカメラ、2つのヒューマノイドロボット、1つの床クリーニングロボット、冷蔵庫と車いすロボットで構成されている。
+
+部屋C(図30c/31c~f、8m×15m)では、73のパッケージと151の処理ノードを用い、8つのLRF、2つのRFIDタグリーダー、1つのXtionセンサー、5つのKinectセンサー、18のViconカメラ、3つのヒューマノイドロボット、1つの床掃除ロボット、モバイル操作ロボットと車いすロボットが利用されている。
+
+ROS–TMSの高いスケーラビリティと柔軟性のおかげで、我々は短時間で様々な環境を設定することができた。
+
+
+# n
+7. まとめ
+本論文では、高齢者の生活をサポートするように設計された、ROS–TMSという情報構造化環境を備えたサービスロボットシステムを紹介した。
+
+部屋には、環境と人をモニタリングするためのいくつかのセンサーが組み込まれているものとする。
+
+人は、環境の情報を用いるヒューマノイドロボットによって様々な活動を支援される。
+
+# n
+本研究では、高齢者の生活で最も一般的に要求されるタスクのひとつである検出とfetch-and-giveタスクに焦点をあてた。
+
+タスクを完了するために必要な様々なサブシステムを提案し、これらのサブシステムが適していることを実証するために、ROS–TMSの動作計画システムを使うfetch-and-giveタスクやセンシングシステムを使う検出タスクといった、いくつかの独立した短期実験を行った。
+
+# n
+現在我々は、手動で定義された信頼性に基づいた冗長な感覚情報による適切なデータ選択のための決定論的なアプローチを採用している。
+
+今後の課題としては、冗長な感覚情報を融合させるための確率論的アプローチへの拡張がある。
+
+また、完成したシステムを長期間テストできる長期的な実験の設計と準備を進めていく。
--- a/s6/themes/projection.css	Fri Jun 03 05:16:27 2016 +0900
+++ b/s6/themes/projection.css	Fri Jun 03 10:06:35 2016 +0900
@@ -20,7 +20,7 @@
   top: 0;
   left: 0;
   margin: 0;
-  padding: 0% 4% 0% 4%;
+  padding: 2% 4% 0% 4%;
   /* css note: order is => top right bottom left  */
   -moz-box-sizing: border-box;
   -webkit-box-sizing: border-box;
--- a/slide.html	Fri Jun 03 05:16:27 2016 +0900
+++ b/slide.html	Fri Jun 03 10:06:35 2016 +0900
@@ -4,7 +4,7 @@
    <meta http-equiv="content-type" content="text/html;charset=utf-8">
    <title>Service robot system with an informationally structured environment</title>
 
-<meta name="generator" content="Slide Show (S9) v2.5.0 on Ruby 2.3.1 (2016-04-26) [x86_64-darwin15]">
+<meta name="generator" content="Slide Show (S9) v2.5.0 on Ruby 2.1.0 (2013-12-25) [x86_64-darwin13.0]">
 <meta name="author"    content="Tatsuki IHA, Nozomi TERUYA" >
 
 <!-- style sheet links -->
@@ -86,8 +86,8 @@
 <div class='slide '>
 <!-- === begin markdown block ===
 
-      generated by markdown/1.2.0 on Ruby 2.3.1 (2016-04-26) [x86_64-darwin15]
-                on 2016-06-03 05:16:01 +0900 with Markdown engine kramdown (1.11.1)
+      generated by markdown/1.2.0 on Ruby 2.1.0 (2013-12-25) [x86_64-darwin13.0]
+                on 2016-06-03 10:05:59 +0900 with Markdown engine kramdown (1.5.0)
                   using options {}
   -->
 
@@ -453,7 +453,7 @@
 <ul>
   <li>the following functions are implemented in the ROS-TMS
     <ol>
-      <li>Communication with sensors, robots, and databases</li>
+      <li>Communication with sensors, robots, and databases  </li>
       <li>Storage,revision,backup,and retrieval of real-time information in an environment</li>
       <li>Maintenance and providing information according to individual IDs assigned to each object and robot</li>
       <li>Notification of the occurrence of particular predefined events, such as accidents</li>
@@ -818,6 +818,573 @@
 <div style="text-align: center;">
     <img src="./images/fig12.svg" alt="message" width="600" />
 </div>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="robot-motion-planning">5. Robot motion planning</h1>
+<ul>
+  <li>Robot motion planning (TMS_RP) is the component of the ROS–TMS that calculates the movement path of the robot and the trajectories of the robot arm for moving, giving, and avoiding obstacles based on information acquired from TMS_SS</li>
+  <li>We consider the necessary planning to implement services such as fetch-and-give tasks because such tasks are among the most frequent tasks required by elderly individuals in daily life.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="robot-motion-planning-1">5. Robot motion planning</h1>
+<ul>
+  <li>Robot motion planning includes wagons for services that can carry and deliver a large amount of objects, for example, at tea time or handing out towels to residents in elderly care facilities as shown in Fig. 14a<br />
+<img src="./images2/fig14.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="robot-motion-planning-2">5. Robot motion planning</h1>
+<ul>
+  <li>Robot motion planning consists of sub-planning, integration, and evaluation of the planning described below to implement the fetch-and-give task.<br />
+    <ol>
+      <li>Grasp planning to grip a wagon  </li>
+      <li>Position planning for goods delivery  </li>
+      <li>Movement path planning  </li>
+      <li>Path planning for wagons  </li>
+      <li>Integration of planning  </li>
+      <li>Evaluation of efficiency and safety  </li>
+    </ol>
+  </li>
+  <li>Each planning, integration, and evaluation process uses environment data obtained from TMS_DB and TMS_SS.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="grasp-planning-to-grip-a-wagon">5.1. Grasp planning to grip a wagon</h1>
+<ul>
+  <li>In order for a robot to push a wagon, the robot needs to grasp the wagon at first.</li>
+  <li>a robot can push a wagon in a stable manner if the robot grasps the wagon from two poles positioned on its sides.</li>
+  <li>Thus, the number of base position options for the robot with respect to the wagon is reduced to four (i) as shown in Fig. 14.
+<img src="./images2/fig14.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="grasp-planning-to-grip-a-wagon-1">5.1. Grasp planning to grip a wagon</h1>
+<ul>
+  <li>The position and orientation of the wagon, as well as its size, is managed using the ROS–TMS database. Using this information, it is possible to determine the correct relative position.</li>
+  <li>Based on the wagon direction when the robot is grasping its long side, valid candidate points can be determined using Eqs.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="grasp-planning-to-grip-a-wagon-2">5.1. Grasp planning to grip a wagon</h1>
+<ul>
+  <li>Eq. (2) through (4) below (i=0,1,2,3). Here, R represents the robot, and W represents the wagon. Subscripts x, y, and θ represent the corresponding x-coordinate, y-coordinate, and posture (rotation of the z-axis).
+<img src="./images2/eq234.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="grasp-planning-to-grip-a-wagon-3">5.1. Grasp planning to grip a wagon</h1>
+<ul>
+  <li>Fig. 13 shows the positional relationship between the robot and the wagon, given i=2.
+<img src="./images2/fig13.png" alt="opt" width="90%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>In order to hand over goods to a person, it is necessary to plan both the position of the goods to be delivered and the base position of the robot according to the person’s position.</li>
+  <li>Using manipulability as an indicator for this planning, the system plans the position of the goods relative to the base position.</li>
+  <li>Manipulability is represented by the degree to which hands/fingers can move when each joint angle is changed.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-1">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>When trying to deliver goods in postures with high manipulability, it is easier to modify the motion, even when small gaps exist between the robot and the person.</li>
+  <li>We assume that high manipulability of the person’s arm makes it more comfortable for him or her to grasp goods. Their relation is represented in Eqs. (5) and (6).</li>
+  <li>The velocity vector V corresponds to the position of hands, and  Q is the joint angle vector.
+<img src="./images2/eq56.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-2">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>If the arm has a redundant degree of freedom, an infinite number of joint angle vectors corresponds to just one hand position.</li>
+  <li>When solving this issue, we calculate the posture that represents the highest manipulability within the range of possible joint angle movements.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-3">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>The planning procedure for the position of goods and the position of robots using manipulability is as follows:
+    <ol>
+      <li>The system maps the manipulability that corresponds to the robots and each person on the local coordinate system.</li>
+      <li>Both manipulability maps are integrated, and the position of goods is determined.</li>
+      <li>Based on the position of goods, the base position of the robot is determined.</li>
+    </ol>
+  </li>
+  <li>We set the robot as the origin of the robot coordinate system, assuming the frontal direction as the x-axis and the lateral direction as the y-axis.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-4">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>This mapping is superimposed along the z-axis, which is the height direction, as shown in Fig. 15b.
+<img src="./images2/fig15.png" alt="opt" width="80%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-5">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>The next step is to determine, using the manipulability map, the position of the goods that are about to be delivered.</li>
+  <li>As shown in Fig. 16a, we take the maximum manipulability value according to each height, and retain the XY coordinates of each local coordinate system.</li>
+  <li>These coordinates represent the relationship between the base position and the positions of the hands.
+<img src="./images2/fig16.png" alt="opt" width="80%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-6">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>According to the calculated height on the manipulability map for a person, the system requests the absolute coordinates of the goods to be delivered, using the previously retained relative coordinates of the hands.</li>
+  <li>The position of the person that will receive the delivered goods is managed through TMS_SS and TMS_DB, and it is also possible to use this position as a reference point to request the position of the goods by fitting the relative coordinates.</li>
+  <li>According to the aforementioned procedure, we can determine the unique position of the goods that are about to be delivered.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-7">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>As the final step, the base position of the robot is determined in order to hold out the goods to their previously calculated position.</li>
+  <li>According to the manipulability map that corresponds to the height of a specific object, the system retrieves the relationship between the positions of hands and the base position.</li>
+  <li>Using the position of the object as a reference point, the robot is able to hold the object out to any determined position if the base position meets the criteria of this relationship.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-8">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>Consequently, at the time of delivery, points on the circumference of the position of the object are determined to be candidate points on the absolute coordinate system of the base position.</li>
+  <li>Considering all of the prospect points of the circumference, the following action planning, for which the system extracts multiple candidate points, is redundant.</li>
+      <li>The best approach is to split the circumference n times, fetch a representative point out of each sector after the split, and limit the number of candidate points.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-9">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>After that, the obtained representative points are evaluated as in Eq. (7), while placing special emphasis on safety.</li>
+  <li>Here, View is a Boolean value that represents whether the robot enters the field of vision of the target person. If it is inside the field of vision, then View is 1, otherwise View is 0.</li>
+  <li>This calculation is necessary because if the robot can enter the field of vision of the target person, then the robot can be operated more easily and the risk of unexpected contact with the robot is also reduced.</li>
+  <li>Dhuman represents the distance to the target person, and Dobs represents the distance to the nearest obstacle.
+<img src="./images2/eq7.png" alt="opt" width="80%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="position-planning-for-goods-delivery-10">5.2. Position planning for goods delivery</h1>
+<ul>
+  <li>In order to reduce the risk of contact with the target person or an obstacle, positions that keep greater distances from the target person and from obstacles are evaluated more highly.</li>
+  <li>If all the candidate points on a given circumference sector result in contact with an obstacle, then the representative points of that sector are not selected.</li>
+  <li>According to the aforementioned process, the base position of the robot is planned based on the position of the requested goods.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-robots">5.3. Movement path planning - Path planning for robots</h1>
+<ul>
+  <li>Path planning for robots that serve in a general living environment requires a high degree of safety, which can be achieved by lowering the probability of contact with persons.</li>
+  <li>However, for robots that push wagons, the parameter space that uniquely defines this state has a maximum of six dimensions, that is, position (x,y) and posture (θ) of a robot and a wagon, and planning a path that represents the highest safety values in such a space is time consuming.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-robots-1">5.3. Movement path planning - Path planning for robots</h1>
+<ul>
+  <li>Thus, we require a method that produces a trajectory with a high degree of safety, but at the same time requires a short processing time. As such, we use a Voronoi map, as shown in Fig. 18.
+<img src="./images2/fig18.png" alt="opt" width="50%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-wagons">5.3. Movement path planning - Path planning for wagons</h1>
+<ul>
+  <li>In order to be able to plan for wagons in real time, we need to reduce the dimensions of the path search space.</li>
+  <li>The parameters that uniquely describe the state of a wagon pushing robot can have a maximum of six dimensions, but in reality the range in which the robot can operate the wagon is more limited.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-wagons-1">5.3. Movement path planning - Path planning for wagons</h1>
+<ul>
+  <li>We set up a control point, as shown in Fig. 19, which fixes the relative positional relationship of the robot with the control point.
+<img src="./images2/fig19.png" alt="opt" width="90%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-wagons-2">5.3. Movement path planning - Path planning for wagons</h1>
+<ul>
+  <li>The operation of the robot is assumed to change in terms of the relative orientation (Wθ) of the wagon with respect to the robot.</li>
+  <li>The range of relative positions is also limited.</li>
+  <li>Accordingly, wagon-pushing robots are presented in just four dimensions, which shortens the search time for the wagon path planning.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-wagons-3">5.3. Movement path planning - Path planning for wagons</h1>
+<ul>
+  <li>Path planning for wagon-pushing robots uses the above-mentioned basic path and is executed as follows:
+    <ol>
+      <li>The start and end points are established.</li>
+      <li>The path for each robot along the basic path is planned.</li>
+      <li>According to each point on the path estimated in step 2, the position of the wagon control point is determined considering the manner in which the position of the wagon control point fits the relationship with the robot position.</li>
+      <li>If the wagon control point is not on the basic path (Fig. 20a), posture (Rθ) of the robot is changed so that the wagon control point passes along the basic path.</li>
+      <li>If the head of the wagon is not on the basic path (Fig. 20b), the relative posture (Wθ) of the wagon is modified so that it passes along the basic path.</li>
+      <li>Steps 3 through 5 are repeated until the end point is reached
+<img src="./images2/fig20.png" alt="opt" width="50%" /></li>
+    </ol>
+  </li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-wagons-4">5.3. Movement path planning - Path planning for wagons</h1>
+<ul>
+  <li>Fig. 21 shows the results of wagon path planning, using example start and end points.
+<img src="./images2/fig21.png" alt="opt" width="70%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="movement-path-planning---path-planning-for-wagons-5">5.3. Movement path planning - Path planning for wagons</h1>
+<ul>
+  <li>Using this procedure we can simplify the space search without sacrificing the safety of the basic path diagram.</li>
+  <li>The actual time required to calculate the path of a single robot was 1.10 (ms).</li>
+  <li>The time including the wagon path planning was 6.41 (ms).</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="integration-of-planning">5.4. Integration of planning</h1>
+<ul>
+  <li>We perform operation planning for overall item-carrying action, which integrates position, path and arm motion planning.
+    <ol>
+      <li>Perform wagon grip position planning in order for the robot to grasp a wagon loaded with goods.</li>
+      <li>Perform position planning for goods delivery. The results of these work position planning tasks becomes the candidate movement target positions for the path planning of the robot and the wagon.</li>
+      <li>Perform an action planning that combines the above-mentioned planning tasks, from the initial position of the robot to the path the robot takes until grasping the wagon, and the path the wagon takes until the robot reaches the position at which the robot can deliver the goods.</li>
+    </ol>
+  </li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="integration-of-planning-1">5.4. Integration of planning</h1>
+<ul>
+  <li>For example
+if there are four candidate positions for wagon gripping and four candidate positions for goods delivery around the target person,
+then we can plan 16 different actions, as shown in Fig. 22. The various action sequences obtained from this procedure are then evaluated to choose the optimum sequence.
+<img src="./images2/fig22.png" alt="opt" width="70%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="evaluation-of-efficiency-and-safety">5.5. Evaluation of efficiency and safety</h1>
+<ul>
+  <li>We evaluate each candidate action sequence based on efficiency and safety, as shown in Eq. (8).</li>
+  <li>The α,β,γ are respectively the weight values of Length, Rotation and ViewRatio.</li>
+  <li>The Length and Rotation represent the total distance traveled and the total rotation angle.</li>
+  <li>Len-min and Rot-min represent the minimum values over all the candidate actions.</li>
+  <li>First and second terms of Eq. (8) are the metrics for efficiency of action.</li>
+  <li>ViewRatio is the number of motion planning points in the person’s visual field out of the total number of motion planning points.
+<img src="./images2/eq8.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiments">6. Experiments</h1>
+<ul>
+  <li>We present the results of fundamental experiments described below using an actual robot and the proposed ROS–TMS.
+    <ol>
+      <li>Experiment to detect changes in the environment</li>
+      <li>Experiment to examine gripping and delivery of goods</li>
+      <li>Simulation of robot motion planning</li>
+      <li>Service experiments</li>
+      <li>Verification of modularity and scalability</li>
+    </ol>
+  </li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-detect-changes-in-the-environment">6.1. Experiment to detect changes in the environment</h1>
+<ul>
+  <li>We conducted experiments to detect changes using ODS (Section  4.3) with various pieces of furniture.</li>
+  <li>We consider six pieces of target furniture, including two tables, two shelves, one chair, and one bed.</li>
+  <li>For each piece of furniture, we prepared 10 sets of previously stored data and newly acquired data of kinds of goods including books, snacks, cups, etc., and performed point change detection separately for each set.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-detect-changes-in-the-environment-1">6.1. Experiment to detect changes in the environment</h1>
+<ul>
+  <li>As the evaluation method, we considered the ratio of change detection with respect to the number of objects that were changed (change detection ratio).</li>
+  <li>We also considered over-detection, which occurs when the system detects a change that has actually not occurred.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-detect-changes-in-the-environment-2">6.1. Experiment to detect changes in the environment</h1>
+<ul>
+  <li>The change detection ratios for each furniture type are as follows: 93.3% for tables, 93.4% for shelves, 84.6% for chairs, and 91.3% for beds.
+<img src="./images2/table3.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-detect-changes-in-the-environment-3">6.1. Experiment to detect changes in the environment</h1>
+<ul>
+  <li>The sections enclosed by circles in each image represent points that actually underwent changes.
+<img src="./images2/fig23.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-examine-gripping-and-delivery-of-goods">6.2. Experiment to examine gripping and delivery of goods</h1>
+<ul>
+  <li>We performed an operation experiment in which a robot grasps an object located on a wagon and delivers the object to a person.</li>
+  <li>As a prerequisite for this service, the goods are assumed to have been placed on the wagon, and their positions are known in advance.</li>
+  <li>After performing the experiment 10 times, the robot successfully grabbed and delivered the object in all cases.
+<img src="./images2/fig24.png" alt="opt" width="100%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-examine-gripping-and-delivery-of-goods-1">6.2. Experiment to examine gripping and delivery of goods</h1>
+<ul>
+  <li>We measured the displacement of the position of the goods (Ox or Oy in Fig. 25) and the linear distance (d) between the measured value and the true value at the time of delivery, to verify the effect of rotation errors and arm posture errors.</li>
+</ul>
+
+<p><img src="./images2/fig25.png" alt="opt" width="50%" />
+<img src="./images2/table4.png" alt="right" width="90%" /></p>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="experiment-to-examine-gripping-and-delivery-of-goods-2">6.2. Experiment to examine gripping and delivery of goods</h1>
+<ul>
+  <li>The distance error of the position of the goods at the time of delivery was 35.8 mm.</li>
+  <li>Because the system plans a delivery posture with some extra manipulability margin, within which both the person and the robot can move their hands, it is possible to cope with errors of this size.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="simulation-of-robot-motion-planning">6.3. Simulation of robot motion planning</h1>
+<ul>
+  <li>We set up one initial position for the robot (Rx, Ry, Rθ) = (1000 mm, 1000 mm, 0°), the wagon (Wx, Wy, Wθ) = (3000 mm, 1000 mm, 0°), and the target person (Hx, Hy, Hθ) = (1400 mm, 2500 mm, −90°), and assume the person is sitting.</li>
+  <li>The range of vision of this person is shown by the red area in Fig. 26b.
+<img src="./images2/fig26.png" alt="opt" width="90%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="simulation-of-robot-motion-planning-1">6.3. Simulation of robot motion planning</h1>
+<ul>
+  <li>The action planning result that passes through wagon grip candidate 1
+<img src="./images2/fig27.png" alt="opt" width="90%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="simulation-of-robot-motion-planning-2">6.3. Simulation of robot motion planning</h1>
+<ul>
+  <li>The action planning result that passes through wagon grip candidate 2
+<img src="./images2/fig28.png" alt="opt" width="90%" /></li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="simulation-of-robot-motion-planning-3">6.3. Simulation of robot motion planning</h1>
+<ul>
+  <li>Furthermore, the evaluation values obtained by varying the weight of each evaluation term for each planning result are listed in Table 5, Table 6 and Table 7.</li>
+</ul>
+
+<p><img src="./images2/table5.png" alt="right" width="50%" />
+<img src="./images2/table6.png" alt="right" width="50%" />
+<img src="./images2/table7.png" alt="right" width="70%" /></p>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="simulation-of-robot-motion-planning-4">6.3. Simulation of robot motion planning</h1>
+<ul>
+  <li>The actions of Plan 2–3 were the most highly evaluated (Table 5).</li>
+  <li>Fig. 28a and d indicate that all of the actions occur within the field of vision of the person.</li>
+  <li>Since the target person can monitor the robot’s actions at all times, the risk of the robot unexpectedly touching the person is lower, and if the robot fails in an action, the situation can be dealt with immediately.</li>
+  <li>The action plan chosen from the above results according to the proposed evaluation values exhibits both efficiency and high safety.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="service-experiments">6.4. Service experiments</h1>
+<p>We performed a service experiment for the carriage of goods, in accordance with the combined results of these planning sequences. The state of the sequence of actions is shown in Fig. 29.
+<img src="./images2/fig29.png" alt="right" width="100%" /></p>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="service-experiments-1">6.4. Service experiments</h1>
+<ul>
+  <li>This service was carried out successfully, avoiding any contact with the environment.</li>
+  <li>The total time for the task execution was 312 s when the maximum velocity of SmartPal-V was limited to 10 mm/s for safety.</li>
+  <li>The robot position was confirmed to always be within the range of vision of the subject during execution.</li>
+  <li>Accordingly, we can say that the planned actions had an appropriate level of safety.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="service-experiments-2">6.4. Service experiments</h1>
+<ul>
+  <li>There was a margin for the movement of hands, as shown in Fig. 29f, so the delivery process could appropriately cope with the movement errors of the robot.</li>
+  <li>In reality, the maximum error from the desired trajectory was about 0.092 m in the experiments.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="verification-of-modularity-and-scalability">6.5. Verification of modularity and scalability</h1>
+<ul>
+  <li>We built the ROS–TMS for three types of rooms to verify its high modularity and scalability.</li>
+  <li>Thanks to the high flexibility and scalability of the ROS–TMS, we could set up these various environments in a comparatively short time.</li>
+</ul>
+
+<p><img src="./images2/fig30.png" alt="right" width="100%" />
+<img src="./images2/fig31.png" alt="right" width="100%" /></p>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="conclusions">7. Conclusions</h1>
+<ul>
+  <li>In the present paper, we have introduced a service robot system with an informationally structured environment named ROS–TMS that is designed to support daily activities of elderly individuals.</li>
+  <li>The room considered herein contains several sensors to monitor the environment and a person.</li>
+  <li>The person is assisted by a humanoid robot that uses information about the environment to support various activities.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="conclusions-1">7. Conclusions</h1>
+<ul>
+  <li>In the present study, we concentrated on detection and fetch-and-give tasks, which we believe will be among the most commonly requested tasks by the elderly in their daily lives.</li>
+  <li>We have presented the various subsystems that are necessary for completing this task and have conducted several independent short-term experiments to demonstrate the suitability of these subsystems, such as a detection task using a sensing system and a fetch-and-give task using a robot motion planning system of the ROS–TMS.</li>
+</ul>
+
+
+</div>
+<div class='slide '>
+<!-- _S9SLIDE_ -->
+<h1 id="conclusions-2">7. Conclusions</h1>
+<ul>
+  <li>Currently, we adopt a deterministic approach for choosing proper data from redundant sensory information, based on manually pre-defined reliability values.</li>
+  <li>Our future work will include the extension to the probabilistic approach for fusing redundant sensory information.</li>
+  <li>Also, we intend to design and prepare a long-term experiment in which we can test the complete system over a longer period of time.</li>
+</ul>
 <!-- === end markdown block === -->
 </div>
 
--- a/slide.md	Fri Jun 03 05:16:27 2016 +0900
+++ b/slide.md	Fri Jun 03 10:06:35 2016 +0900
@@ -99,7 +99,7 @@
 
 # 2. Related research
 - Ubiquitous robotics involves the design and deployment of robots in smart network environments in which everything is interconnected
-- define three types of Ubibots 
+- define three types of Ubibots
     - software robots (Sobots)
     - embedded robots (Embots)
     - mobile robots (Mobots)
@@ -110,7 +110,7 @@
 - Mobots are designed to provide services and explicitly have the ability to manipulate u-space using robotic arms
 - Sobot is a virtual robot that has the ability to move to any location through a network and to communicate with humans
 - The present authors have previously demonstrated the concept of a PIES using Ubibots in a simulated environment and u-space
- 
+
 # 2. Related research
 - RoboEarth is essentially a World Wide Web for robots, namely, a giant network and database repository in which robots can share information and learn from each other about their behavior and their environment
 - the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots
@@ -127,7 +127,7 @@
 - the problem of handing over an object between a human and a robot has been studied in HumanRobot Interaction (HRI)
 
 # 2. Related research
-- the work that is closest to ours is the one by Dehais et al 
+- the work that is closest to ours is the one by Dehais et al
 - in their study, physiological and subjective evaluation for a handing over task was presented
 - the performance of hand-over tasks were evaluated according to three criteria: legibility, safety and physical comfort
 - these criteria are represented as fields of cost functions mapped around the human to generate ergonomic hand-over motions
@@ -276,7 +276,7 @@
 - the steps of the change detection process are as follows
     1. Identification of furniture
     2. Alignment of the furniture model
-    3. Object extraction by furniture removal 
+    3. Object extraction by furniture removal
     4. Segmentation of objects
     5. Comparison with the stored information
 
@@ -363,3 +363,268 @@
 <div style="text-align: center;">
     <img src="./images/fig12.svg" alt="message" width="600">
 </div>
+
+# 5. Robot motion planning
+* Robot motion planning (TMS_RP) is the component of the ROS–TMS that calculates the movement path of the robot and the trajectories of the robot arm for moving, giving, and avoiding obstacles based on information acquired from TMS_SS
+* We consider the necessary planning to implement services such as fetch-and-give tasks because such tasks are among the most frequent tasks required by elderly individuals in daily life.
+
+# 5. Robot motion planning
+* Robot motion planning includes wagons for services that can carry and deliver a large number of objects, for example, at tea time or when handing out towels to residents in elderly care facilities, as shown in Fig. 14a  
+![opt](./images2/fig14.png){:width="100%"}
+
+# 5. Robot motion planning
+* Robot motion planning consists of sub-planning, integration, and evaluation of the planning described below to implement the fetch-and-give task.  
+    1. Grasp planning to grip a wagon  
+    2. Position planning for goods delivery  
+    3. Movement path planning  
+    4. Path planning for wagons  
+    5. Integration of planning  
+    6. Evaluation of efficiency and safety  
+* Each planning, integration, and evaluation process uses environment data obtained from TMS_DB and TMS_SS.
+
+# 5.1. Grasp planning to grip a wagon
+* In order for a robot to push a wagon, the robot first needs to grasp the wagon.
+* A robot can push a wagon in a stable manner if it grasps the wagon by the two poles positioned on its sides.
+* Thus, the number of base position options for the robot with respect to the wagon is reduced to four (indexed by i), as shown in Fig. 14.
+![opt](./images2/fig14.png){:width="100%"}
+
+# 5.1. Grasp planning to grip a wagon
+* The position and orientation of the wagon, as well as its size, are managed in the ROS–TMS database. Using this information, it is possible to determine the correct relative position.
+* Based on the wagon direction when the robot is grasping its long side, valid candidate points can be determined using Eqs. (2)–(4).
+
+# 5.1. Grasp planning to grip a wagon
+* In Eqs. (2) through (4) below (i = 0, 1, 2, 3), R represents the robot and W represents the wagon. Subscripts x, y, and θ represent the corresponding x-coordinate, y-coordinate, and orientation (rotation about the z-axis).
+![opt](./images2/eq234.png){:width="100%"}
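The exact expressions of Eqs. (2)–(4) are given in the image above. As a rough sketch of the idea (the grasp offset and the wagon half-dimensions below are illustrative assumptions, not values from the paper), the four candidate base poses can be enumerated like this:

```python
import math

def grasp_candidates(wx, wy, wtheta, half_w, half_d, offset):
    """Enumerate four candidate robot base poses (x, y, theta) around a
    wagon at pose (wx, wy, wtheta), one per side, each facing the wagon.
    half_w / half_d are half the wagon's width and depth; offset is the
    assumed grasp distance between the robot base and the wagon edge."""
    candidates = []
    for i in range(4):                       # i = 0..3, one per side
        side = wtheta + i * math.pi / 2      # outward normal of side i
        dist = (half_w if i % 2 == 0 else half_d) + offset
        rx = wx + dist * math.cos(side)
        ry = wy + dist * math.sin(side)
        rtheta = side + math.pi              # face toward the wagon
        candidates.append((rx, ry, rtheta))
    return candidates
```

For example, `grasp_candidates(3000, 1000, 0.0, 300, 450, 200)` yields the four poses around a wagon at (3000 mm, 1000 mm, 0°).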
+
+# 5.1. Grasp planning to grip a wagon
+* Fig. 13 shows the positional relationship between the robot and the wagon, given i=2.
+![opt](./images2/fig13.png){:width="90%"}
+
+# 5.2. Position planning for goods delivery
+* In order to hand over goods to a person, it is necessary to plan both the position of the goods to be delivered and the base position of the robot according to the person’s position.
+* Using manipulability as an indicator for this planning, the system plans the position of the goods relative to the base position.
+* Manipulability is represented by the degree to which hands/fingers can move when each joint angle is changed.
+
+# 5.2. Position planning for goods delivery
+* When trying to deliver goods in postures with high manipulability, it is easier to modify the motion, even when small gaps exist between the robot and the person.
+* We assume that high manipulability of the person's arm makes it more comfortable for the person to grasp the goods. This relation is represented in Eqs. (5) and (6).
+* The velocity vector V corresponds to the position of the hands, and Q is the joint angle vector.
+![opt](./images2/eq56.png){:width="100%"}
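The standard (Yoshikawa) manipulability measure behind Eqs. (5) and (6) is w = √det(J·Jᵀ) of the arm Jacobian J. As a minimal sketch with a planar 2-link arm (a simplification; the robot and the person in the paper have more joints), J is square and w reduces to |det J| = l₁·l₂·|sin q₂|:

```python
import math

def manipulability_2link(l1, l2, q1, q2):
    """Yoshikawa manipulability w = sqrt(det(J J^T)) for a planar
    2-link arm with link lengths l1, l2 and joint angles q1, q2.
    J is square here, so w reduces to |det J|."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    j11 = -l1 * s1 - l2 * s12   # d(hand x)/dq1
    j12 = -l2 * s12             # d(hand x)/dq2
    j21 = l1 * c1 + l2 * c12    # d(hand y)/dq1
    j22 = l2 * c12              # d(hand y)/dq2
    return abs(j11 * j22 - j12 * j21)   # equals l1*l2*|sin(q2)|
```

Manipulability peaks at an elbow angle of 90° and vanishes when the arm is fully stretched (a singular posture), which is why delivery postures with high manipulability tolerate small position errors.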
+
+# 5.2. Position planning for goods delivery
+* If the arm has a redundant degree of freedom, an infinite number of joint angle vectors corresponds to just one hand position.
+* To resolve this redundancy, we calculate the posture with the highest manipulability within the range of possible joint angle movements.
+
+# 5.2. Position planning for goods delivery
+* The planning procedure for the position of goods and the position of robots using manipulability is as follows:
+1. The system maps the manipulability that corresponds to the robots and each person on the local coordinate system.
+2. Both manipulability maps are integrated, and the position of goods is determined.
+3. Based on the position of goods, the base position of the robot is determined.
+* We set the robot as the origin of the robot coordinate system, assuming the frontal direction as the x-axis and the lateral direction as the y-axis.
+
+# 5.2. Position planning for goods delivery
+* This mapping is superimposed along the z-axis, which is the height direction, as shown in Fig. 15b.
+![opt](./images2/fig15.png){:width="80%"}
+
+# 5.2. Position planning for goods delivery
+* The next step is to determine, using the manipulability map, the position of the goods that are about to be delivered.
+* As shown in Fig. 16a, we take the maximum manipulability value according to each height, and retain the XY coordinates of each local coordinate system.
+* These coordinates represent the relationship between the base position and the positions of the hands.
+![opt](./images2/fig16.png){:width="80%"}
+
+# 5.2. Position planning for goods delivery
+* According to the calculated height on the manipulability map for a person, the system requests the absolute coordinates of the goods to be delivered, using the previously retained relative coordinates of the hands.
+* The position of the person that will receive the delivered goods is managed through TMS_SS and TMS_DB, and it is also possible to use this position as a reference point to request the position of the goods by fitting the relative coordinates.
+* According to the aforementioned procedure, we can determine the unique position of the goods that are about to be delivered.
+
+
+# 5.2. Position planning for goods delivery
+* As the final step, the base position of the robot is determined in order to hold out the goods to their previously calculated position.
+* According to the manipulability map that corresponds to the height of a specific object, the system retrieves the relationship between the positions of hands and the base position.
+* Using the position of the object as a reference point, the robot is able to hold the object out to any determined position if the base position meets the criteria of this relationship.
+
+# 5.2. Position planning for goods delivery
+* Consequently, at the time of delivery, points on the circumference around the position of the object become the candidate points for the base position in the absolute coordinate system.
+* Considering every point on the circumference in the subsequent action planning would be redundant, since many nearly equivalent candidates would have to be evaluated.
+* A better approach is to split the circumference into n sectors, fetch one representative point from each sector, and thereby limit the number of candidate points.
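The sector-splitting step above can be sketched as follows (the reach radius and the number of sectors n are illustrative parameters, not values from the paper):

```python
import math

def representative_points(gx, gy, radius, n):
    """Split the circle of the given radius around the goods position
    (gx, gy) into n sectors and return one representative base-position
    candidate (x, y) at the center of each sector."""
    points = []
    for k in range(n):
        ang = (k + 0.5) * 2.0 * math.pi / n   # center angle of sector k
        points.append((gx + radius * math.cos(ang),
                       gy + radius * math.sin(ang)))
    return points
```

With n = 8, the planner evaluates only eight base-position candidates instead of the whole circumference.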
+
+# 5.2. Position planning for goods delivery
+* After that, the obtained representative points are evaluated as in Eq. (7), while placing special emphasis on safety.
+* Here, View is a Boolean value that represents whether the robot enters the field of vision of the target person. If it is inside the field of vision, then View is 1, otherwise View is 0.
+* This calculation is necessary because if the robot can enter the field of vision of the target person, then the robot can be operated more easily and the risk of unexpected contact with the robot is also reduced.
+* Dhuman represents the distance to the target person, and Dobs represents the distance to the nearest obstacle.
+![opt](./images2/eq7.png){:width="80%"}
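The exact form of Eq. (7) is given in the image above. As a hedged sketch of the idea (the additive form and the weights w1–w3 are our assumptions), the score rewards visibility and clearance from the person and obstacles:

```python
def evaluate_point(view, d_human, d_obs, w1=1.0, w2=1.0, w3=1.0):
    """Illustrative safety score for one representative point.
    view: 1 if the robot would be inside the person's field of vision,
    0 otherwise. d_human / d_obs: distances to the target person and to
    the nearest obstacle. Higher scores are considered safer."""
    return w1 * view + w2 * d_human + w3 * d_obs

# Sectors whose candidate points all collide with an obstacle are
# skipped entirely; among the remaining representative points, the
# highest-scoring one is chosen as the robot's base position.
```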
+
+# 5.2. Position planning for goods delivery
+* In order to reduce the risk of contact with the target person or an obstacle, positions that are farther from the person and from surrounding obstacles are evaluated more highly.
+* If all the candidate points on a given circumference sector result in contact with an obstacle, then the representative points of that sector are not selected.
+* According to the aforementioned process, the base position of the robot is planned based on the position of the requested goods.
+
+
+
+
+# 5.3. Movement path planning - Path planning for robots
+* Path planning for robots that serve in a general living environment requires a high degree of safety, which can be achieved by lowering the probability of contact with persons.
+* However, for robots that push wagons, the parameter space that uniquely defines this state has a maximum of six dimensions, that is, position (x,y) and posture (θ) of a robot and a wagon, and planning a path that represents the highest safety values in such a space is time consuming.
+
+
+# 5.3. Movement path planning - Path planning for robots
+* Thus, we require a method that produces a trajectory with a high degree of safety, but at the same time requires a short processing time. As such, we use a Voronoi map, as shown in Fig. 18.
+![opt](./images2/fig18.png){:width="50%"}
+
+# 5.3. Movement path planning - Path planning for wagons
+* In order to be able to plan for wagons in real time, we need to reduce the dimensions of the path search space.
+* The parameters that uniquely describe the state of a wagon pushing robot can have a maximum of six dimensions, but in reality the range in which the robot can operate the wagon is more limited.
+
+# 5.3. Movement path planning - Path planning for wagons
+* We set up a control point, as shown in Fig. 19, which fixes the relative positional relationship of the robot with the control point.
+![opt](./images2/fig19.png){:width="90%"}
+
+# 5.3. Movement path planning - Path planning for wagons
+* The operation of the robot is assumed to change in terms of the relative orientation (Wθ) of the wagon with respect to the robot.
+* The range of relative positions is also limited.
+* Accordingly, the state of a wagon-pushing robot can be represented in just four dimensions, which shortens the search time for wagon path planning.
+
+# 5.3. Movement path planning - Path planning for wagons
+* Path planning for wagon-pushing robots uses the above-mentioned basic path and is executed as follows:
+1. The start and end points are established.
+2. The path for each robot along the basic path is planned.
+3. According to each point on the path estimated in step 2, the position of the wagon control point is determined based on its fixed relative relationship to the robot position.
+# 5.3. Movement path planning - Path planning for wagons
+4. If the wagon control point is not on the basic path (Fig. 20a), posture (Rθ) of the robot is changed so that the wagon control point passes along the basic path.
+5. If the head of the wagon is not on the basic path (Fig. 20b), the relative posture (Wθ) of the wagon is modified so that it passes along the basic path.
+6. Steps 3 through 5 are repeated until the end point is reached.
+![opt](./images2/fig20.png){:width="50%"}
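Step 4 above (re-orienting the robot so the control point stays on the basic path) reduces to a small geometry problem: given the robot position, the control point's fixed offset in the robot frame, and the desired control-point location on the path, solve for the robot heading. A sketch under these assumptions (function and variable names are ours, not from the paper):

```python
import math

def heading_for_control_point(rx, ry, ox, oy, cx, cy):
    """Return the robot heading R_theta that places the control point,
    located at the fixed offset (ox, oy) in the robot frame, onto the
    desired path point (cx, cy). Solvable by rotation alone only when
    the offset length matches the robot-to-target distance."""
    dx, dy = cx - rx, cy - ry
    if not math.isclose(math.hypot(dx, dy), math.hypot(ox, oy)):
        raise ValueError("control point unreachable by rotation alone")
    # rotate the offset direction onto the target direction
    return math.atan2(dy, dx) - math.atan2(oy, ox)
```

For instance, a robot at the origin with the control point 1 m ahead must turn to heading π/2 to place that point at (0, 1).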
+
+# 5.3. Movement path planning - Path planning for wagons
+* Fig. 21 shows the results of wagon path planning, using example start and end points.
+![opt](./images2/fig21.png){:width="70%"}
+
+# 5.3. Movement path planning - Path planning for wagons
+* Using this procedure we can simplify the space search without sacrificing the safety of the basic path diagram.
+* The actual time required to calculate the path of a single robot was 1.10 ms.
+* The time including the wagon path planning was 6.41 ms.
+
+# 5.4. Integration of planning
+* We perform operation planning for overall item-carrying action, which integrates position, path and arm motion planning.
+1. Perform wagon grip position planning in order for the robot to grasp a wagon loaded with goods.
+2. Perform position planning for goods delivery. The results of these position planning tasks become the candidate target positions for the path planning of the robot and the wagon.
+3. Perform action planning that combines the above planning tasks: from the robot's initial position, the path the robot takes until it grasps the wagon, and the path the wagon takes until the robot reaches the position from which it can deliver the goods.
+
+# 5.4. Integration of planning
+* For example,
+if there are four candidate positions for wagon gripping and four candidate positions for goods delivery around the target person,
+then we can plan 16 different action sequences, as shown in Fig. 22. The action sequences obtained from this procedure are then evaluated to choose the optimum sequence.
+![opt](./images2/fig22.png){:width="70%"}
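Enumerating the 4 × 4 candidate combinations and keeping the best-scoring sequence can be sketched as below (the scoring function is a placeholder passed in by the caller; the actual evaluation is Eq. (8)):

```python
from itertools import product

def best_plan(grip_candidates, delivery_candidates, score):
    """Enumerate every (grip, delivery) pairing -- 4 x 4 = 16 plans in
    the example above -- and return the pairing with the highest score."""
    plans = product(grip_candidates, delivery_candidates)
    return max(plans, key=lambda plan: score(*plan))
```

For example, `best_plan(grips, deliveries, eq8_score)` would return the grip/delivery pair whose integrated action sequence evaluates best.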
+
+# 5.5. Evaluation of efficiency and safety
+* We evaluate each candidate action sequence based on efficiency and safety, as shown in Eq. (8).
+* α, β, and γ are the weights of Length, Rotation, and ViewRatio, respectively.
+* Length and Rotation represent the total distance traveled and the total rotation angle, respectively.
+* Len-min and Rot-min represent the minimum values over all candidate actions.
+* The first and second terms of Eq. (8) are the metrics for the efficiency of an action.
+* ViewRatio is the number of motion planning points inside the person’s visual field divided by the total number of motion planning points.
+![opt](./images2/eq8.png){:width="100%"}
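The exact form of Eq. (8) is in the image above. A hedged reading (normalizing each efficiency term as the candidate minimum divided by the candidate's own value is our assumption) is:

```python
def evaluate_sequence(length, rotation, view_ratio,
                      len_min, rot_min, alpha, beta, gamma):
    """Illustrative evaluation of one candidate action sequence: the
    first two terms reward efficiency (shorter travel, less rotation),
    the third rewards safety (fraction of motion planning points inside
    the person's field of vision). Higher is better."""
    return (alpha * len_min / length
            + beta * rot_min / rotation
            + gamma * view_ratio)
```

A sequence that attains both minima while staying fully visible scores α + β + γ, the best possible value.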
+
+# 6. Experiments
+* We present the results of fundamental experiments described below using an actual robot and the proposed ROS–TMS.
+1. Experiment to detect changes in the environment
+2. Experiment to examine gripping and delivery of goods
+3. Simulation of robot motion planning
+4. Service experiments
+5. Verification of modularity and scalability
+
+# 6.1. Experiment to detect changes in the environment
+* We conducted experiments to detect changes using ODS (Section  4.3) with various pieces of furniture.
+* We consider six pieces of target furniture, including two tables, two shelves, one chair, and one bed.
+* For each piece of furniture, we prepared 10 sets of previously stored data and newly acquired data of kinds of goods including books, snacks, cups, etc., and performed point change detection separately for each set.
+
+# 6.1. Experiment to detect changes in the environment
+* As the evaluation method, we considered the ratio of change detection with respect to the number of objects that were changed (change detection ratio).
+* We also considered over-detection, which occurs when the system detects a change that has actually not occurred.
+
+# 6.1. Experiment to detect changes in the environment
+* The change detection ratios for each furniture type are as follows: 93.3% for tables, 93.4% for shelves, 84.6% for chairs, and 91.3% for beds.
+![opt](./images2/table3.png){:width="100%"}
+
+# 6.1. Experiment to detect changes in the environment
+* The sections enclosed by circles in each image represent points that actually underwent changes.
+![opt](./images2/fig23.png){:width="100%"}
+
+# 6.2. Experiment to examine gripping and delivery of goods
+* We performed an operation experiment in which a robot grasps an object located on a wagon and delivers the object to a person.
+* As a prerequisite for this service, the goods are assumed to have been placed on the wagon, and their positions are known in advance.
+* After performing the experiment 10 times, the robot successfully grabbed and delivered the object in all cases.
+![opt](./images2/fig24.png){:width="100%"}
+
+# 6.2. Experiment to examine gripping and delivery of goods
+* We measured the displacement of the position of the goods (Ox or Oy in Fig. 25) and the linear distance (d) between the measured value and the true value at the time of delivery, to verify the effect of rotation errors and arm posture errors.
+
+![opt](./images2/fig25.png){:width="50%"}
+![right](./images2/table4.png){:width="90%"}
+
+# 6.2. Experiment to examine gripping and delivery of goods
+* The distance error of the position of the goods at the time of delivery was 35.8 mm.
+* Because the system plans a delivery posture with some extra manipulability margin, within which both the person and the robot can move their hands, it is possible to cope with errors of this size.
+
+# 6.3. Simulation of robot motion planning
+* We set up one initial position for the robot (Rx, Ry, Rθ) = (1000 mm, 1000 mm, 0°), the wagon (Wx, Wy, Wθ) = (3000 mm, 1000 mm, 0°), and the target person (Hx, Hy, Hθ) = (1400 mm, 2500 mm, −90°), and assume the person is sitting.
+* The range of vision of this person is shown by the red area in Fig. 26b.
+![opt](./images2/fig26.png){:width="90%"}
+
+# 6.3. Simulation of robot motion planning
+* The action planning result that passes through wagon grip candidate 1
+![opt](./images2/fig27.png){:width="90%"}
+
+# 6.3. Simulation of robot motion planning
+* The action planning result that passes through wagon grip candidate 2
+![opt](./images2/fig28.png){:width="90%"}
+
+# 6.3. Simulation of robot motion planning
+* Furthermore, the evaluation values obtained by varying the weight of each evaluation term for each planning result are listed in Table 5, Table 6 and Table 7.
+
+![right](./images2/table5.png){:width="50%"}
+![right](./images2/table6.png){:width="50%"}
+![right](./images2/table7.png){:width="70%"}
+
+# 6.3. Simulation of robot motion planning
+* The actions of Plan 2–3 were the most highly evaluated (Table 5).
+* Fig. 28a and d indicate that all of the actions occur within the field of vision of the person.
+* Since the target person can monitor the robot’s actions at all times, the risk of the robot unexpectedly touching the person is lower, and if the robot fails in an action, the situation can be dealt with immediately.
+* The action plan chosen from the above results according to the proposed evaluation values exhibits both efficiency and high safety.
+
+# 6.4. Service experiments
+We performed a service experiment for the carriage of goods, in accordance with the combined results of these planning sequences. The state of the sequence of actions is shown in Fig. 29.
+![right](./images2/fig29.png){:width="100%"}
+
+# 6.4. Service experiments
+* This service was carried out successfully, avoiding any contact with the environment.
+* The total time for the task execution was 312 s when the maximum velocity of SmartPal-V was limited to 10 mm/s for safety.
+* The robot position was confirmed to always be within the range of vision of the subject during execution.
+* Accordingly, we can say that the planned actions had an appropriate level of safety.
+
+# 6.4. Service experiments
+* There was a margin for the movement of hands, as shown in Fig. 29f, so the delivery process could appropriately cope with the movement errors of the robot.
+* In reality, the maximum error from the desired trajectory was about 0.092 m in the experiments.
+
+# 6.5. Verification of modularity and scalability
+* We built the ROS–TMS for three types of rooms to verify its high modularity and scalability.
+* Thanks to the high flexibility and scalability of the ROS–TMS, we could set up these various environments in a comparatively short time.
+
+![right](./images2/fig30.png){:width="100%"}
+![right](./images2/fig31.png){:width="100%"}
+
+# 7. Conclusions
+* In the present paper, we have introduced a service robot system with an informationally structured environment named ROS–TMS that is designed to support daily activities of elderly individuals.
+* The room considered herein contains several sensors to monitor the environment and a person.
+* The person is assisted by a humanoid robot that uses information about the environment to support various activities.
+
+# 7. Conclusions
+* In the present study, we concentrated on detection and fetch-and-give tasks, which we believe will be among the most commonly requested tasks by the elderly in their daily lives.
+* We have presented the various subsystems that are necessary for completing this task and have conducted several independent short-term experiments to demonstrate the suitability of these subsystems, such as a detection task using a sensing system and a fetch-and-give task using a robot motion planning system of the ROS–TMS.
+
+# 7. Conclusions
+* Currently, we adopt a deterministic approach for choosing proper data from redundant sensory information, based on manually pre-defined reliability values.
+* Our future work will include the extension to the probabilistic approach for fusing redundant sensory information.
+* Also, we intend to design and prepare a long-term experiment in which we can test the complete system over a longer period of time.