Software Defined Storage

Ceph Storage Basic Performance Testing (Benchmark)

DaehanCNI 2024. 4. 25. 09:00

Let's look at basic performance testing for Ceph storage. Ceph ships with a built-in benchmark tool called rados bench. This post walks through how to test read and write performance with it.

 

Drop all file system caches

[ceph: root@cnode1 /]# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
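
For reference, the value written to /proc/sys/vm/drop_caches controls what is dropped: 1 frees only the page cache, 2 frees dentries and inodes, and 3 frees both. To keep results comparable, it is best to drop caches on every node involved in the test, not just cnode1:

[ceph: root@cnode1 /]# echo 1 | sudo tee /proc/sys/vm/drop_caches   # page cache only
[ceph: root@cnode1 /]# echo 2 | sudo tee /proc/sys/vm/drop_caches   # dentries and inodes only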

 

Create a test pool (testbench)

[ceph: root@cnode1 /]# ceph osd pool create testbench 100 100
pool 'testbench' created

 

You can confirm in the dashboard that the testbench pool has been created.
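
The two 100s passed to ceph osd pool create are pg_num and pgp_num, the placement group counts for the new pool. If you prefer the CLI over the dashboard, the pool can be verified there as well (the exact output format varies by release):

[ceph: root@cnode1 /]# ceph osd pool ls detail | grep testbench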

 

1. Pool write test (10s)

[ceph: root@cnode1 /]# rados bench -p testbench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_cnode1_107
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        16         0         0         0           -           0
    2      16        16         0         0         0           -           0
    3      16        16         0         0         0           -           0
    4      16        19         3   2.99931         3     3.85507     3.59382
    5      16        22         6   4.79894        12     4.38239     3.97441
    6      16        31        15   9.97974        36     5.96971     4.53939
    7      16        34        18   10.2676        12     6.93254     4.86075
    8      16        35        19   9.48514         4     3.39464     4.78358
    9      16        35        19   8.43248         0           -     4.78358
   10      16        40        24   9.58729        10     4.51099      4.7434
   11       9        40        31    11.259        28     4.99238     4.72767
   12       4        40        36   11.9864        20     2.13649     4.66039
   13       1        40        39   11.9872        12     3.48414     4.52733
   14       1        40        39   11.1316         0           -     4.52733
   15       1        40        39     10.39         0           -     4.52733
Total time run:         15.196
Total writes made:      40
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     10.5291
Stddev Bandwidth:       11.2939
Max bandwidth (MB/sec): 36
Min bandwidth (MB/sec): 0
Average IOPS:           2
Stddev IOPS:            2.85857
Max IOPS:               9
Min IOPS:               0
Average Latency(s):     4.63274
Stddev Latency(s):      1.47399
Max latency(s):         8.74351
Min latency(s):         2.08366
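
Because the write test was run with --no-cleanup, the 40 benchmark objects remain in the pool; the read tests below reuse them. They can be listed with rados ls, and their names begin with the object prefix printed above (benchmark_data_cnode1_107):

[ceph: root@cnode1 /]# rados -p testbench ls | head -5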

 

2. Pool sequential read test (10s)

[ceph: root@cnode1 /]# rados bench -p testbench 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        31        15   59.9663        60    0.347971    0.532335
    2       3        40        37   73.9411        88    0.640055     0.66216
Total time run:       2.12581
Total reads made:     40
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   75.2655
Average IOPS:         18
Stddev IOPS:          4.94975
Max IOPS:             22
Min IOPS:             15
Average Latency(s):   0.700011
Max latency(s):       1.53253
Min latency(s):       0.0746353
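
Note that the run finished after about 2 seconds rather than 10: the seq test stops as soon as it has read back every object left behind by the write test, and only 40 objects were available. For a steadier number, write more data first and then repeat the read test, for example:

[ceph: root@cnode1 /]# rados bench -p testbench 60 write --no-cleanup
[ceph: root@cnode1 /]# rados bench -p testbench 60 seq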

 

3. Pool random read test (10s)

[ceph: root@cnode1 /]# rados bench -p testbench 10 rand
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        89        73   291.816       292    0.230631    0.187521
    2      15       168       153   305.124       320    0.666573    0.186466
    3      16       246       230    306.06       308  0.00841697    0.188437
    4      15       350       335   334.477       420    0.367942    0.175705
    5      16       462       446   356.336       444    0.216682    0.159297
    6      15       535       520   346.175       296   0.0263334    0.157869
    7      16       628       612   349.252       368   0.0550834    0.146466
    8      16       723       707   353.082       380   0.0157252    0.133658
    9      16       798       782   347.179       300   0.0061054    0.124411
   10      15       824       809   323.278       108   0.0109071    0.122154
   11       5       824       819   297.541        40     3.88233    0.193868
Total time run:       11.0422
Total reads made:     824
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   298.491
Average IOPS:         74
Stddev IOPS:          30.7323
Max IOPS:             111
Min IOPS:             10
Average Latency(s):   0.209482
Max latency(s):       8.47311
Min latency(s):       0.00573436
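
Unlike seq, the rand test re-reads the benchmark objects in random order, so it is not limited by the number of objects and runs for the full duration. To push more parallel I/O, the thread count can be raised with -t, for example:

[ceph: root@cnode1 /]# rados bench -p testbench 10 rand -t 32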

 

4. Test options

  • -t : number of threads used for concurrent reads and writes (default: 16)
  • -b : size of the objects that are written (default: 4 MB, safe maximum 16 MB); an example using -b follows the run below
  • --run-name : name used for the benchmark objects; prevents the I/O errors that can occur when multiple clients access the same objects
[ceph: root@cnode1 /]# rados bench -p testbench 10 write -t 4 --run-name client1
hints = 1
Maintaining 4 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_cnode1_190
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1       4         5         1   3.99151         4    0.745337    0.745337
    2       4         9         5   9.98842        16     1.98342     1.32027
    3       4        12         8   10.6577        12     1.04618     1.22846
    4       4        15        11   10.9926        12     1.64823     1.27254
    5       4        18        14   11.1922        12     1.17847     1.24373
    6       4        21        17   11.3262        12     1.26309     1.23097
    7       4        23        19   10.8509         8     1.77703     1.26058
    8       4        27        23   11.4939        16     1.49748     1.29277
    9       4        31        27    11.994        16     1.39836      1.2818
   10       4        33        29   11.5945         8     1.33322     1.27286
Total time run:         10.6255
Total writes made:      33
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     12.423
Stddev Bandwidth:       3.97772
Max bandwidth (MB/sec): 16
Min bandwidth (MB/sec): 4
Average IOPS:           3
Stddev IOPS:            0.994429
Max IOPS:               4
Min IOPS:               1
Average Latency(s):     1.25319
Stddev Latency(s):      0.274884
Max latency(s):         1.98342
Min latency(s):         0.745337
Cleaning up (deleting benchmark objects)
Removed 33 objects
Clean up completed and total clean up time :2.05373
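
The -b option from the list above was not used in this run. A sketch that combines it with the other options might look like the following; -b is given in bytes here (8388608 = 8 MB), and client2 is just an illustrative run name:

[ceph: root@cnode1 /]# rados bench -p testbench 10 write -b 8388608 -t 4 --run-name client2 --no-cleanup

Note that without --no-cleanup, a write test deletes its own objects when it finishes, as the cleanup messages above show.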

 

5. Delete the test data

[ceph: root@cnode1 /]# rados -p testbench cleanup
Removed 40 objects
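
The 40 objects removed here are the ones left behind by the first write test; the run in section 4 already cleaned up after itself because --no-cleanup was not passed. Once benchmarking is finished, the test pool itself can be deleted. Pool deletion is disabled by default, so mon_allow_pool_delete has to be enabled first, along the lines of:

[ceph: root@cnode1 /]# ceph config set mon mon_allow_pool_delete true
[ceph: root@cnode1 /]# ceph osd pool delete testbench testbench --yes-i-really-really-mean-it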

Original

https://www.ibm.com/docs/en/storage-ceph/7?topic=benchmark-benchmarking-ceph-performance

 
