# HG changeset patch
# User taiki
# Date 1421066248 -32400
# Node ID 8cfb3d2a1f14caee9754c5bc120ef815fddb2313
# Parent 60e8fc84b413c84ab9c8714e6bedeec758a3b830
only gfs2 benchmark

diff -r 60e8fc84b413 -r 8cfb3d2a1f14 benchmark.txt
--- a/benchmark.txt	Mon Jan 12 11:47:46 2015 +0900
+++ b/benchmark.txt	Mon Jan 12 21:37:28 2015 +0900
@@ -1,6 +1,6 @@
-use FileBench
+FileBench benchmark on Fedora 20, kernel 3.16
 
-2014 1/1 GFS2 / fileserver / 60 seconds / VM to GFS2 only bldsv09 access / VM image on FC
+* 2015 1/1 GFS2 / fileserver / 60 seconds / VM to GFS2 only bldsv09 access / VM image on FC
 
 statfile1 16217ops 270ops/s 0.0mb/s 0.2ms/op 2155us/op-cpu [0ms - 428ms]
 deletefile1 16221ops 270ops/s 0.0mb/s 8.4ms/op 6842us/op-cpu [0ms - 2911ms]
@@ -15,7 +15,7 @@
 createfile1 16267ops 271ops/s 0.0mb/s 0.9ms/op 2445us/op-cpu [0ms - 428ms]
 1093: 127.732: IO Summary: 178655 ops, 2976.451 ops/s, (270/542 r/w), 70.2mb/s, 457us cpu/op, 28.4ms latency
 
-2014 1/1 GFS2 / fileserver / 60 seconds / only bldsv09 access / FC
+* 2015 1/1 GFS2 / fileserver / 60 seconds / only bldsv09 access / FC
 
 statfile1 19419ops 324ops/s 0.0mb/s 0.4ms/op 1967us/op-cpu [0ms - 230ms]
 deletefile1 19419ops 324ops/s 0.0mb/s 8.5ms/op 6421us/op-cpu [0ms - 8915ms]
@@ -30,7 +30,7 @@
 createfile1 19464ops 324ops/s 0.0mb/s 9.7ms/op 13736us/op-cpu [0ms - 7886ms]
 22867: 99.839: IO Summary: 213762 ops, 3562.315 ops/s, (324/648 r/w), 84.3mb/s, 1117us cpu/op, 43.5ms latency
 
-2014 1/1 ext4 / fileserver / 60 seconds / only bldsv09 access / SSD
+* 2015 1/1 ext4 / fileserver / 60 seconds / only bldsv09 access / SSD
 
 statfile1 41482ops 691ops/s 0.0mb/s 0.0ms/op 6404us/op-cpu [0ms - 0ms]
 deletefile1 41487ops 691ops/s 0.0mb/s 1.2ms/op 7563us/op-cpu [0ms - 1571ms]
@@ -45,7 +45,7 @@
 createfile1 41529ops 692ops/s 0.0mb/s 0.4ms/op 6989us/op-cpu [0ms - 1572ms]
 23036: 80.006: IO Summary: 456556 ops, 7608.630 ops/s, (691/1384 r/w), 181.3mb/s, 1016us cpu/op, 7.4ms latency
 
-2014 1/1 ext4 / fileserver / 60 seconds / docker only bldsv09 / SSD
+* 2015 1/1 ext4 / fileserver / 60 seconds / docker only bldsv09 / SSD
 
 statfile1 45012ops 750ops/s 0.0mb/s 0.0ms/op 6428us/op-cpu [0ms - 1ms]
 deletefile1 45015ops 750ops/s 0.0mb/s 0.4ms/op 7567us/op-cpu [0ms - 257ms]
@@ -60,7 +60,7 @@
 createfile1 45058ops 751ops/s 0.0mb/s 0.3ms/op 7260us/op-cpu [0ms - 258ms]
 37: 79.282: IO Summary: 495354 ops, 8255.094 ops/s, (750/1501 r/w), 196.9mb/s, 1033us cpu/op, 5.5ms latency
 
-2014 1/1 GFS2 / fileserver / 60 seconds / docker to only bldsv09 access / FC
+* 2015 1/1 GFS2 / fileserver / 60 seconds / docker to only bldsv09 access / FC
 
 Mounted /media/fcs with the -v option
@@ -77,7 +77,7 @@
 createfile1 27439ops 457ops/s 0.0mb/s 6.9ms/op 14198us/op-cpu [0ms - 13186ms]
 43: 92.660: IO Summary: 301419 ops, 5023.193 ops/s, (457/913 r/w), 118.9mb/s, 1187us cpu/op, 33.2ms latency
 
-2014 1/1 GFS2 / fileserver / 60 seconds / bldsv09 access / FC
+* 2015 1/1 GFS2 / fileserver / 60 seconds / bldsv09 access / FC
 
 statfile1 26112ops 435ops/s 0.0mb/s 0.2ms/op 1669us/op-cpu [0ms - 291ms]
 deletefile1 26111ops 435ops/s 0.0mb/s 4.4ms/op 5851us/op-cpu [0ms - 14536ms]
@@ -92,7 +92,7 @@
 createfile1 26160ops 436ops/s 0.0mb/s 8.7ms/op 13211us/op-cpu [0ms - 14537ms]
 18999: 86.335: IO Summary: 287298 ops, 4787.839 ops/s, (435/870 r/w), 113.5mb/s, 1127us cpu/op, 35.0ms latency
 
-2014 1/6 GFS2 / fileserver / 60 seconds / bldsv10 access / FC
+* 2015 1/6 GFS2 / fileserver / 60 seconds / bldsv10 access / FC
 
 statfile1 23738ops 396ops/s 0.0mb/s 1.4ms/op 2917us/op-cpu [0ms - 13948ms]
 deletefile1 23718ops 395ops/s 0.0mb/s 8.4ms/op 6633us/op-cpu [0ms - 13969ms]
@@ -108,7 +108,7 @@
 5359: 68.172: IO Summary: 261164 ops, 4352.327 ops/s, (396/791 r/w), 103.2mb/s, 1243us cpu/op, 36.3ms latency
 
-2014 1/6 GFS2 / fileserver / 60 seconds / bldsv10 and bldsv09 access / FC
+* 2015 1/6 GFS2 / fileserver / 60 seconds / bldsv10 and bldsv09 access / FC
 
 Reads and writes from 2 nodes (bldsv09, bldsv10) to separate directories on GFS2, driven with ansible
 Using the same directory from both nodes was not possible due to the nature of filebench
@@ -148,7 +148,7 @@
 18603: 65.666: IO Summary: 54158 ops, 902.538 ops/s, (82/164 r/w), 21.1mb/s, 1473us cpu/op, 181.6ms latency
 
-2014 1/12 ZFS / fileserver / 60 seconds / bldsv10 access / SSD
+* 2014 1/12 ZFS / fileserver / 60 seconds / bldsv10 access / SSD
 
 statfile1 38897ops 648ops/s 0.0mb/s 0.0ms/op 6531us/op-cpu [0ms - 2ms]
 deletefile1 38902ops 648ops/s 0.0mb/s 2.7ms/op 7105us/op-cpu [0ms - 1203ms]
@@ -163,3 +163,28 @@
 createfile1 38947ops 649ops/s 0.0mb/s 2.6ms/op 7003us/op-cpu [0ms - 1211ms]
 18334: 123.671: IO Summary: 428100 ops, 7134.431 ops/s, (648/1297 r/w), 169.9mb/s, 1221us cpu/op, 8.9ms latency
+
+* 2014 1/12 GFS2 / fileserver / 60 seconds / bldsv10 access / FC
+Following http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/, the pool that devicemapper creates was moved onto the FC device
+
+1. Stop the Docker daemon.
+2. Wipe out /var/lib/docker. (That should sound familiar, right?)
+3. Create the storage directory: mkdir -p /var/lib/docker/devicemapper/devicemapper.
+4. Create a data symbolic link in that directory, pointing to the device: ln -s /dev/sdb /var/lib/docker/devicemapper/devicemapper/data.
+5. Restart Docker.
+6. Check with docker info that the Data Space Total value is correct.
+
+ Ran the container with docker run --privileged -it fedora:20 /bin/bash and installed filebench inside it
+
+statfile1 41866ops 698ops/s 0.0mb/s 0.0ms/op 6235us/op-cpu [0ms - 1ms]
+deletefile1 41869ops 698ops/s 0.0mb/s 0.2ms/op 6971us/op-cpu [0ms - 218ms]
+closefile3 41874ops 698ops/s 0.0mb/s 0.0ms/op 6054us/op-cpu [0ms - 2ms]
+readfile1 41881ops 698ops/s 91.0mb/s 8.8ms/op 29662us/op-cpu [0ms - 266ms]
+openfile2 41886ops 698ops/s 0.0mb/s 0.1ms/op 6587us/op-cpu [0ms - 13ms]
+closefile2 41890ops 698ops/s 0.0mb/s 0.0ms/op 6085us/op-cpu [0ms - 2ms]
+appendfilerand1 41895ops 698ops/s 5.4mb/s 2.5ms/op 12903us/op-cpu [0ms - 14ms]
+openfile1 41900ops 698ops/s 0.0mb/s 0.1ms/op 6640us/op-cpu [0ms - 9ms]
+closefile1 41904ops 698ops/s 0.0mb/s 0.0ms/op 6122us/op-cpu [0ms - 2ms]
+wrtfile1 41910ops 698ops/s 86.5mb/s 8.5ms/op 28960us/op-cpu [0ms - 272ms]
+createfile1 41914ops 699ops/s 0.0mb/s 0.2ms/op 7029us/op-cpu [0ms - 15ms]
+35: 76.024: IO Summary: 460789 ops, 7679.141 ops/s, (698/1397 r/w), 182.9mb/s, 1022us cpu/op, 6.8ms latency
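
A minimal shell sketch of the relocation described in the last entry above, assuming the FC LUN is visible as /dev/sdb (as in step 4) and that Docker on Fedora 20 is managed through systemd; how filebench was installed inside the container is not recorded, so only the container start is shown:

  systemctl stop docker                                 # 1. stop the Docker daemon
  rm -rf /var/lib/docker                                # 2. wipe out /var/lib/docker
  mkdir -p /var/lib/docker/devicemapper/devicemapper    # 3. create the storage directory
  ln -s /dev/sdb /var/lib/docker/devicemapper/devicemapper/data   # 4. point the data file at the FC device
  systemctl start docker                                # 5. restart Docker
  docker info | grep 'Data Space Total'                 # 6. the pool should now report the FC device's size
  docker run --privileged -it fedora:20 /bin/bash       # then install filebench in the container and run the workload
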
diff -r 60e8fc84b413 -r 8cfb3d2a1f14 on_gfs2.txt
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/on_gfs2.txt	Mon Jan 12 21:37:28 2015 +0900
@@ -0,0 +1,142 @@
+
+* 2015 1/1 GFS2 / fileserver / 60 seconds / VM to GFS2 only bldsv09 access / VM image on FC
+
+statfile1 16217ops 270ops/s 0.0mb/s 0.2ms/op 2155us/op-cpu [0ms - 428ms]
+deletefile1 16221ops 270ops/s 0.0mb/s 8.4ms/op 6842us/op-cpu [0ms - 2911ms]
+closefile3 16223ops 270ops/s 0.0mb/s 0.0ms/op 2132us/op-cpu [0ms - 19ms]
+readfile1 16225ops 270ops/s 34.2mb/s 23.8ms/op 4463us/op-cpu [0ms - 5577ms]
+openfile2 16237ops 271ops/s 0.0mb/s 0.3ms/op 2175us/op-cpu [0ms - 428ms]
+closefile2 16243ops 271ops/s 0.0mb/s 0.0ms/op 2114us/op-cpu [0ms - 5ms]
+appendfilerand1 16245ops 271ops/s 2.1mb/s 49.7ms/op 8875us/op-cpu [0ms - 5643ms]
+openfile1 16257ops 271ops/s 0.0mb/s 0.3ms/op 2158us/op-cpu [0ms - 425ms]
+closefile1 16258ops 271ops/s 0.0mb/s 0.0ms/op 2133us/op-cpu [0ms - 9ms]
+wrtfile1 16262ops 271ops/s 33.9mb/s 1.5ms/op 2783us/op-cpu [0ms - 1806ms]
+createfile1 16267ops 271ops/s 0.0mb/s 0.9ms/op 2445us/op-cpu [0ms - 428ms]
+ 1093: 127.732: IO Summary: 178655 ops, 2976.451 ops/s, (270/542 r/w), 70.2mb/s, 457us cpu/op, 28.4ms latency
+
+* 2015 1/1 GFS2 / fileserver / 60 seconds / only bldsv09 access / FC
+
+statfile1 19419ops 324ops/s 0.0mb/s 0.4ms/op 1967us/op-cpu [0ms - 230ms]
+deletefile1 19419ops 324ops/s 0.0mb/s 8.5ms/op 6421us/op-cpu [0ms - 8915ms]
+closefile3 19429ops 324ops/s 0.0mb/s 0.0ms/op 1396us/op-cpu [0ms - 0ms]
+readfile1 19429ops 324ops/s 41.9mb/s 2.9ms/op 6144us/op-cpu [0ms - 8908ms]
+openfile2 19431ops 324ops/s 0.0mb/s 1.1ms/op 2121us/op-cpu [0ms - 8906ms]
+closefile2 19432ops 324ops/s 0.0mb/s 0.0ms/op 1438us/op-cpu [0ms - 0ms]
+appendfilerand1 19432ops 324ops/s 2.5mb/s 8.8ms/op 7417us/op-cpu [0ms - 8912ms]
+openfile1 19435ops 324ops/s 0.0mb/s 3.1ms/op 2399us/op-cpu [0ms - 8907ms]
+closefile1 19436ops 324ops/s 0.0mb/s 0.0ms/op 1338us/op-cpu [0ms - 0ms]
+wrtfile1 19436ops 324ops/s 39.8mb/s 96.0ms/op 120915us/op-cpu [0ms - 9012ms]
+createfile1 19464ops 324ops/s 0.0mb/s 9.7ms/op 13736us/op-cpu [0ms - 7886ms]
+22867: 99.839: IO Summary: 213762 ops, 3562.315 ops/s, (324/648 r/w), 84.3mb/s, 1117us cpu/op, 43.5ms latency
+
+* 2015 1/1 GFS2 / fileserver / 60 seconds / docker to only bldsv09 access / FC
+
+ Mounted /media/fcs with the -v option
+
+statfile1 27399ops 457ops/s 0.0mb/s 0.7ms/op 1869us/op-cpu [0ms - 13179ms]
+deletefile1 27381ops 456ops/s 0.0mb/s 3.6ms/op 6066us/op-cpu [0ms - 13194ms]
+closefile3 27400ops 457ops/s 0.0mb/s 0.0ms/op 1378us/op-cpu [0ms - 0ms]
+readfile1 27400ops 457ops/s 59.2mb/s 1.5ms/op 6065us/op-cpu [0ms - 209ms]
+openfile2 27400ops 457ops/s 0.0mb/s 0.3ms/op 2014us/op-cpu [0ms - 99ms]
+closefile2 27400ops 457ops/s 0.0mb/s 0.0ms/op 1400us/op-cpu [0ms - 0ms]
+appendfilerand1 27400ops 457ops/s 3.6mb/s 4.4ms/op 9155us/op-cpu [0ms - 13193ms]
+openfile1 27400ops 457ops/s 0.0mb/s 0.4ms/op 2048us/op-cpu [0ms - 81ms]
+closefile1 27400ops 457ops/s 0.0mb/s 0.0ms/op 1341us/op-cpu [0ms - 1ms]
+wrtfile1 27400ops 457ops/s 56.1mb/s 81.9ms/op 130988us/op-cpu [0ms - 13500ms]
+createfile1 27439ops 457ops/s 0.0mb/s 6.9ms/op 14198us/op-cpu [0ms - 13186ms]
+ 43: 92.660: IO Summary: 301419 ops, 5023.193 ops/s, (457/913 r/w), 118.9mb/s, 1187us cpu/op, 33.2ms latency
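
The "docker to only bldsv09 access" run above only notes that the -v option was used to reach /media/fcs; the invocation was presumably along the lines of the following sketch (image name, mount target, and workload path are assumptions, not recorded in the notes):

  docker run -it -v /media/fcs:/media/fcs fedora:20 /bin/bash   # bind-mount the GFS2 mount point into the container
  # inside the container, point the fileserver workload's $dir at a directory under /media/fcs, then:
  filebench -f fileserver.f
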
+
+* 2015 1/1 GFS2 / fileserver / 60 seconds / bldsv09 access / FC
+
+statfile1 26112ops 435ops/s 0.0mb/s 0.2ms/op 1669us/op-cpu [0ms - 291ms]
+deletefile1 26111ops 435ops/s 0.0mb/s 4.4ms/op 5851us/op-cpu [0ms - 14536ms]
+closefile3 26114ops 435ops/s 0.0mb/s 0.0ms/op 1187us/op-cpu [0ms - 0ms]
+readfile1 26114ops 435ops/s 56.1mb/s 5.0ms/op 4831us/op-cpu [0ms - 14540ms]
+openfile2 26114ops 435ops/s 0.0mb/s 1.4ms/op 1825us/op-cpu [0ms - 14518ms]
+closefile2 26114ops 435ops/s 0.0mb/s 0.0ms/op 1212us/op-cpu [0ms - 0ms]
+appendfilerand1 26114ops 435ops/s 3.4mb/s 7.2ms/op 8338us/op-cpu [0ms - 14540ms]
+openfile1 26115ops 435ops/s 0.0mb/s 1.7ms/op 1891us/op-cpu [0ms - 14519ms]
+closefile1 26115ops 435ops/s 0.0mb/s 0.0ms/op 1206us/op-cpu [0ms - 0ms]
+wrtfile1 26115ops 435ops/s 54.1mb/s 76.3ms/op 126556us/op-cpu [0ms - 14636ms]
+createfile1 26160ops 436ops/s 0.0mb/s 8.7ms/op 13211us/op-cpu [0ms - 14537ms]
+18999: 86.335: IO Summary: 287298 ops, 4787.839 ops/s, (435/870 r/w), 113.5mb/s, 1127us cpu/op, 35.0ms latency
+
+* 2015 1/6 GFS2 / fileserver / 60 seconds / bldsv10 access / FC
+
+statfile1 23738ops 396ops/s 0.0mb/s 1.4ms/op 2917us/op-cpu [0ms - 13948ms]
+deletefile1 23718ops 395ops/s 0.0mb/s 8.4ms/op 6633us/op-cpu [0ms - 13969ms]
+closefile3 23739ops 396ops/s 0.0mb/s 0.0ms/op 2433us/op-cpu [0ms - 0ms]
+readfile1 23739ops 396ops/s 50.9mb/s 5.4ms/op 10531us/op-cpu [0ms - 14041ms]
+openfile2 23739ops 396ops/s 0.0mb/s 3.8ms/op 2983us/op-cpu [0ms - 13955ms]
+closefile2 23739ops 396ops/s 0.0mb/s 0.0ms/op 2398us/op-cpu [0ms - 0ms]
+appendfilerand1 23740ops 396ops/s 3.1mb/s 7.0ms/op 9821us/op-cpu [0ms - 13969ms]
+openfile1 23742ops 396ops/s 0.0mb/s 1.1ms/op 3156us/op-cpu [0ms - 13949ms]
+closefile1 23742ops 396ops/s 0.0mb/s 0.0ms/op 2358us/op-cpu [0ms - 0ms]
+wrtfile1 23743ops 396ops/s 49.2mb/s 56.8ms/op 114702us/op-cpu [0ms - 14031ms]
+createfile1 23785ops 396ops/s 0.0mb/s 25.1ms/op 16142us/op-cpu [0ms - 13969ms]
+ 5359: 68.172: IO Summary: 261164 ops, 4352.327 ops/s, (396/791 r/w), 103.2mb/s, 1243us cpu/op, 36.3ms latency
+
+* 2015 1/6 GFS2 / fileserver / 60 seconds / bldsv10 and bldsv09 access / FC
+
+Reads and writes from 2 nodes (bldsv09, bldsv10) to separate directories on GFS2, driven with ansible
+Using the same directory from both nodes was not possible due to the nature of filebench
+
+ansible -s -i hosts all -a 'filebench -f /home/taira/hg/benchmarks/fileserver.f' --sudo --ask-sudo-pass
+
+ bldsv09.cr.ie.u-ryukyu.ac.jp
+ /media/fcs/bldsv09
+
+statfile1 2832ops 47ops/s 0.0mb/s 15.2ms/op 5290us/op-cpu [0ms - 5189ms]
+deletefile1 2830ops 47ops/s 0.0mb/s 40.4ms/op 7021us/op-cpu [0ms - 17584ms]
+closefile3 2839ops 47ops/s 0.0mb/s 0.0ms/op 1828us/op-cpu [0ms - 0ms]
+readfile1 2839ops 47ops/s 6.0mb/s 4.8ms/op 2853us/op-cpu [0ms - 1642ms]
+openfile2 2847ops 47ops/s 0.0mb/s 120.9ms/op 7457us/op-cpu [0ms - 19056ms]
+closefile2 2852ops 48ops/s 0.0mb/s 0.0ms/op 2002us/op-cpu [0ms - 0ms]
+appendfilerand1 2852ops 48ops/s 0.4mb/s 40.8ms/op 14246us/op-cpu [0ms - 9765ms]
+openfile1 2866ops 48ops/s 0.0mb/s 239.1ms/op 12955us/op-cpu [0ms - 19058ms]
+closefile1 2867ops 48ops/s 0.0mb/s 0.0ms/op 2065us/op-cpu [0ms - 0ms]
+wrtfile1 2867ops 48ops/s 5.9mb/s 31.5ms/op 12675us/op-cpu [0ms - 5644ms]
+createfile1 2879ops 48ops/s 0.0mb/s 452.5ms/op 76766us/op-cpu [0ms - 28904ms]
+ 4400: 65.803: IO Summary: 31370 ops, 522.771 ops/s, (47/95 r/w), 12.2mb/s, 1526us cpu/op, 316.6ms latency
+
+ bldsv10.cr.ie.u-ryukyu.ac.jp
+ /media/fcs/bldsv10
+
+ statfile1 4912ops 82ops/s 0.0mb/s 11.9ms/op 2079us/op-cpu [0ms - 4671ms]
+ deletefile1 4891ops 82ops/s 0.0mb/s 71.6ms/op 12556us/op-cpu [0ms - 13723ms]
+ closefile3 4919ops 82ops/s 0.0mb/s 0.0ms/op 1386us/op-cpu [0ms - 0ms]
+ readfile1 4919ops 82ops/s 10.3mb/s 3.1ms/op 2007us/op-cpu [0ms - 6064ms]
+ openfile2 4922ops 82ops/s 0.0mb/s 16.0ms/op 2292us/op-cpu [0ms - 4709ms]
+ closefile2 4926ops 82ops/s 0.0mb/s 0.0ms/op 1303us/op-cpu [0ms - 0ms]
+ appendfilerand1 4926ops 82ops/s 0.6mb/s 28.7ms/op 11045us/op-cpu [0ms - 7014ms]
+ openfile1 4933ops 82ops/s 0.0mb/s 93.4ms/op 2783us/op-cpu [0ms - 15341ms]
+ closefile1 4936ops 82ops/s 0.0mb/s 0.0ms/op 1264us/op-cpu [0ms - 0ms]
+ wrtfile1 4936ops 82ops/s 10.2mb/s 99.5ms/op 75415us/op-cpu [0ms - 10896ms]
+ createfile1 4938ops 82ops/s 0.0mb/s 220.4ms/op 37436us/op-cpu [0ms - 16501ms]
+ 18603: 65.666: IO Summary: 54158 ops, 902.538 ops/s, (82/164 r/w), 21.1mb/s, 1473us cpu/op, 181.6ms latency
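
The ansible command above takes its targets from the hosts inventory passed with -i; a minimal inventory for this setup would look roughly like the following (the group name is illustrative, since the command uses the all pattern):

  # hosts -- the two nodes driven from ansible
  [gfs2_nodes]
  bldsv09.cr.ie.u-ryukyu.ac.jp
  bldsv10.cr.ie.u-ryukyu.ac.jp

Each node's fileserver.f presumably sets $dir to that node's own directory (/media/fcs/bldsv09 and /media/fcs/bldsv10 above), which is how the two nodes stay out of each other's directory.
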
+
+* 2014 1/12 GFS2 / fileserver / 60 seconds / bldsv10 access / FC
+
+Following http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/, the pool that devicemapper creates was moved onto the FC device
+
+1. Stop the Docker daemon.
+2. Wipe out /var/lib/docker. (That should sound familiar, right?)
+3. Create the storage directory: mkdir -p /var/lib/docker/devicemapper/devicemapper.
+4. Create a data symbolic link in that directory, pointing to the device: ln -s /dev/sdb /var/lib/docker/devicemapper/devicemapper/data.
+5. Restart Docker.
+6. Check with docker info that the Data Space Total value is correct.
+
+ Ran the container with docker run --privileged -it fedora:20 /bin/bash and installed filebench inside it
+
+statfile1 41866ops 698ops/s 0.0mb/s 0.0ms/op 6235us/op-cpu [0ms - 1ms]
+deletefile1 41869ops 698ops/s 0.0mb/s 0.2ms/op 6971us/op-cpu [0ms - 218ms]
+closefile3 41874ops 698ops/s 0.0mb/s 0.0ms/op 6054us/op-cpu [0ms - 2ms]
+readfile1 41881ops 698ops/s 91.0mb/s 8.8ms/op 29662us/op-cpu [0ms - 266ms]
+openfile2 41886ops 698ops/s 0.0mb/s 0.1ms/op 6587us/op-cpu [0ms - 13ms]
+closefile2 41890ops 698ops/s 0.0mb/s 0.0ms/op 6085us/op-cpu [0ms - 2ms]
+appendfilerand1 41895ops 698ops/s 5.4mb/s 2.5ms/op 12903us/op-cpu [0ms - 14ms]
+openfile1 41900ops 698ops/s 0.0mb/s 0.1ms/op 6640us/op-cpu [0ms - 9ms]
+closefile1 41904ops 698ops/s 0.0mb/s 0.0ms/op 6122us/op-cpu [0ms - 2ms]
+wrtfile1 41910ops 698ops/s 86.5mb/s 8.5ms/op 28960us/op-cpu [0ms - 272ms]
+createfile1 41914ops 699ops/s 0.0mb/s 0.2ms/op 7029us/op-cpu [0ms - 15ms]
+35: 76.024: IO Summary: 460789 ops, 7679.141 ops/s, (698/1397 r/w), 182.9mb/s, 1022us cpu/op, 6.8ms latency