Ceph Reef (18.2.X): Compression Algorithms and Compression Modes

Published by 尹正杰 on 2024-09-05

                                              Author: 尹正傑

Copyright notice: this is original work and reproduction is prohibited! Violations will be pursued legally.

Table of contents
  • I. Overview of Ceph compression
    • 1. Overview of Ceph compression
    • 2. Commands for enabling compression
  • II. Ceph compression examples
    • 1. View the default compression algorithm
    • 2. Change the compression algorithm
    • 3. Change the compression mode
    • 4. Restore the algorithm and mode

I. Overview of Ceph compression

1. Overview of Ceph compression

Ceph supports efficient data transfer and can compress data: the BlueStore storage engine ships with inline data compression to save disk space (note that the compression mode defaults to none, so compression must be enabled explicitly).

Ceph supports the common compression algorithms none, zlib (not recommended), lz4 (performance close to snappy), zstd (high compression ratio, but CPU-intensive), and snappy; the default algorithm is snappy.
	
Ceph supports four compression modes: none, passive, aggressive, and force; the default is none.
	- none:
	    Never compress data.
	- passive:
	    Compress data only if the client hints that it is COMPRESSIBLE.
	- aggressive:
	    Compress data unless the client hints that it is INCOMPRESSIBLE (a sketch of supplying such a hint follows this list).
	- force:
	    Always compress data, regardless of hints.
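
The passive and aggressive modes differ only in how they react to hints sent by clients. For RBD clients, such a hint can be supplied through the rbd_compression_hint option (it also appears in the config dump in section II.1); a minimal sketch, assuming the yinzhengjie-rbd pool used in the examples below:

rbd config pool set yinzhengjie-rbd rbd_compression_hint compressible  # mark RBD writes to this pool as COMPRESSIBLE
rbd config pool get yinzhengjie-rbd rbd_compression_hint               # read the hint back

With this hint in place, even a pool in passive mode will compress data written through librbd.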

2. Commands for enabling compression

Commands for enabling compression (if you need cluster-wide compression, it is best to set it in the configuration file):
ceph osd pool set <pool_name> compression_algorithm snappy  # set the compression algorithm
ceph osd pool set <pool_name> compression_mode aggressive   # set the compression mode


Other available compression parameters:
    compression_required_ratio:
    	The required compression ratio, a double-precision floating-point value.
    	It is defined as SIZE_COMPRESSED/SIZE_ORIGINAL, i.e. the ratio of the compressed size to the original size; the default is 0.875000. With the default, for example, a 64 KiB chunk is kept compressed only if it shrinks to 56 KiB (64 KiB × 0.875) or less; otherwise it is stored uncompressed.

    compression_max_blob_size:
    	The maximum size of chunks to compress, an unsigned integer; chunks larger than this are split into smaller blobs before being compressed. The default is 0, meaning the global bluestore_compression_max_blob_size* values apply.

    compression_min_blob_size:
    	The minimum size of chunks to compress, an unsigned integer; chunks smaller than this are not compressed. The default is 0, meaning the global bluestore_compression_min_blob_size* values apply.
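
Like the algorithm and mode, these parameters can be tuned per pool with `ceph osd pool set`. A minimal sketch; the values are purely illustrative, not recommendations:

ceph osd pool set <pool_name> compression_required_ratio 0.8   # keep compressed data only if it is at most 80% of the original size
ceph osd pool set <pool_name> compression_min_blob_size 8192   # do not bother compressing chunks smaller than 8 KiB
ceph osd pool set <pool_name> compression_max_blob_size 131072 # split chunks larger than 128 KiB before compressing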
    

Global compression options:
	Compression attributes can also be set in the Ceph configuration file, in which case they apply to all pools. The relevant parameters are listed below (a config-database sketch follows the list):
		bluestore_compression_algorithm
		bluestore_compression_mode
		bluestore_compression_required_ratio
		bluestore_compression_min_blob_size
		bluestore_compression_max_blob_size
		bluestore_compression_min_blob_size_ssd
		bluestore_compression_max_blob_size_ssd
		bluestore_compression_min_blob_size_hdd
		bluestore_compression_max_blob_size_hdd
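
On a cephadm-deployed Reef cluster such as the one in the examples below, the same options can also be stored in the centralized configuration database instead of editing ceph.conf; a sketch, assuming snappy in aggressive mode is the desired cluster-wide policy:

ceph config set osd bluestore_compression_algorithm snappy  # default algorithm for all OSDs
ceph config set osd bluestore_compression_mode aggressive   # compress unless hinted INCOMPRESSIBLE
ceph config get osd bluestore_compression_mode              # confirm the stored value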

II. Ceph compression examples

1. View the default compression algorithm

[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config show | grep compression
    "bluestore_compression_algorithm": "snappy",
    "bluestore_compression_max_blob_size": "0",
    "bluestore_compression_max_blob_size_hdd": "65536",
    "bluestore_compression_max_blob_size_ssd": "65536",
    "bluestore_compression_min_blob_size": "0",
    "bluestore_compression_min_blob_size_hdd": "8192",
    "bluestore_compression_min_blob_size_ssd": "65536",
    "bluestore_compression_mode": "none",
    "bluestore_compression_required_ratio": "0.875000",
    "bluestore_rocksdb_options": "compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0",
    "filestore_rocksdb_options": "max_background_jobs=10,compaction_readahead_size=2097152,compression=kNoCompression",
    "kstore_rocksdb_options": "compression=kNoCompression",
    "leveldb_compression": "true",
    "mon_rocksdb_options": "write_buffer_size=33554432,compression=kNoCompression,level_compaction_dynamic_level_bytes=true",
    "ms_osd_compression_algorithm": "snappy",
    "rbd_compression_hint": "none",
[root@ceph141 ~]# 
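
The admin socket shows the running configuration of a single OSD daemon, and its path embeds the cluster fsid. The same information can also be read through the MON-side config interface; a sketch:

ceph config show osd.0 | grep compression             # running config of osd.0, without needing the admin socket
ceph config get osd bluestore_compression_algorithm   # value recorded for OSDs in the config database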

2. Change the compression algorithm

[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd  # by default, no compression algorithm is shown
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 63 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 1.88
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_algorithm zstd  # set the compression algorithm
set pool 2 compression_algorithm to zstd
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd  # check whether the compression algorithm took effect
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 344 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd application rbd read_balance_score 1.88
[root@ceph141 ~]# 
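
Besides grepping `ceph osd pool ls detail`, the per-pool value can also be read back directly; a sketch:

ceph osd pool get yinzhengjie-rbd compression_algorithm  # should report zstd after the change above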

3. Change the compression mode

[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 344 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd application rbd read_balance_score 1.88
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_mode aggressive
set pool 2 compression_mode to aggressive
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 345 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd compression_mode aggressive application rbd read_balance_score 1.88
[root@ceph141 ~]# 
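
To confirm that aggressive mode is actually compressing data, one option is to write a highly compressible object and watch the pool's compression counters; a hedged sketch (the file and object names are made up for illustration):

dd if=/dev/zero of=/tmp/zeros.bin bs=1M count=16           # generate highly compressible test data
rados -p yinzhengjie-rbd put test-compress /tmp/zeros.bin  # write it into the pool
ceph df detail                                             # the pool's USED COMPR / UNDER COMPR columns should grow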

4. Restore the algorithm and mode

[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 345 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd compression_mode aggressive application rbd read_balance_score 1.88
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_mode none
set pool 2 compression_mode to none
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_algorithm snappy
set pool 2 compression_algorithm to snappy
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 347 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm snappy compression_mode none application rbd read_balance_score 1.88
[root@ceph141 ~]# 
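
If compression was also enabled cluster-wide through the configuration database (as sketched in section I.2), those overrides can be removed so the built-in defaults apply again; a sketch:

ceph config rm osd bluestore_compression_algorithm  # drop the cluster-wide algorithm override
ceph config rm osd bluestore_compression_mode       # drop the cluster-wide mode override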
