LightDB Canopy Cluster Management
In this section we discuss how to add or remove nodes in a Canopy cluster, and how to handle node failures.
To make it easier to move shards across nodes or to re-replicate shards from failed nodes, Canopy supports fully online shard rebalancing. The following sections briefly cover the features the shard rebalancer provides. Canopy can also be used in SaaS and analytics systems.
Shards
Choosing the shard count for each distributed table is a balance between the flexibility of having more shards and the overhead of query planning and cross-shard execution. If you decide to change a table's shard count after distributing it, you can use the alter_distributed_table function.
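As a sketch, changing the shard count later might look like the call below. This assumes Canopy keeps the Citus-style signature for alter_distributed_table (parameter names shard_count and cascade_to_colocated), and my_events is a hypothetical table; consult the Canopy function reference for the exact form.

-- re-shard a hypothetical distributed table into 64 shards;
-- cascade_to_colocated keeps any co-located tables in step
SELECT alter_distributed_table('my_events',
                               shard_count := 64,
                               cascade_to_colocated := true);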
Multi-Tenant SaaS Use Case
The best choice varies with your data access patterns. For example, in the Multi-Tenant Database use case we recommend choosing between 32 and 128 shards. For smaller workloads, say under 100 GB, you can start with 32 shards; for larger workloads choose 64 or 128. This gives you room to scale from 32 worker machines up to 128.
Real-Time Analytics Use Case
In the Real-Time Analytics use case, the shard count should be related to the total number of cores on the workers. To ensure maximum parallelism, create enough shards on each node so that every CPU core has at least one shard. We typically recommend creating a generous number of initial shards, for example 2x or 4x the current number of CPU cores, which leaves headroom for adding worker nodes and CPU cores later.
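As a minimal sketch of sizing the shard count at table-creation time: the numbers and table below are hypothetical, and canopy.shard_count is an assumed GUC name (the Canopy counterpart of the usual shard-count setting); check the Canopy configuration reference before relying on it.

-- cluster with 4 workers x 16 cores = 64 cores; use 2x cores = 128 shards
SET canopy.shard_count = 128;   -- assumed GUC name, verify in the Canopy docs
CREATE TABLE page_views (page_id bigint, viewed_at timestamptz, tenant_id int);
SELECT create_distributed_table('page_views', 'page_id');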
Keep in mind, however, that Canopy opens one database connection per shard for each query, and these connections are limited (they cannot exceed the operating system's file-descriptor limit). Systems with many tables and high concurrency should keep the shard count as small as possible so that distributed queries do not have to wait for connections too often. In other words, the number of connections required (max concurrent queries * shard count) should generally not exceed the total number of connections possible in the system (worker node count * max connections per worker node).
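To make that formula concrete with hypothetical numbers: a cluster of 4 worker nodes, each configured with max_connections = 300, can accept roughly 4 * 300 = 1200 connections in total. With 64 shards per distributed table, each distributed query may open up to 64 connections, so only about 1200 / 64 ≈ 18 distributed queries can run fully in parallel before they start waiting for connections.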
Cluster Scaling
Canopy's logical-sharding architecture lets you scale the cluster without downtime. This section describes how to add more nodes to a Canopy cluster to improve query performance and scalability.
Adding a Worker Node
Canopy stores all the data of distributed tables on the worker nodes. So if you want to scale out your cluster by adding more compute capacity, you can do so by adding a worker node.
To add a new node to the cluster, first register the node's host and port (where LightDB is running) in the pg_dist_node catalog table. You can do this with canopy_add_node. For example:
create database test;   -- run on every machine instance

-- the following is run only on the coordinator (CN)
\c test
SELECT * from canopy_add_node('10.20.30.10', 5432);
SELECT * from canopy_add_node('10.20.30.11', 5432);
SELECT * from canopy_add_node('10.20.30.12', 5432);

lightdb@test=# select * from pg_dist_node;
 nodeid | groupid |  nodename   | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+-------------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
      5 |       5 | 10.20.30.11 |     5432 | default  | t           | t        | primary  | default     | t              | t
      6 |       6 | 10.20.30.12 |     5432 | default  | t           | t        | primary  | default     | t              | t
(2 rows)
The new node becomes available for shards of new distributed tables. Note that without a rebalance operation, a newly added worker node does not improve the performance of the existing cluster.
If your cluster has very large reference tables, they will slow down the addition of a node. In that case, consider the canopy.replicate_reference_tables_on_activate (boolean) GUC parameter.
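A hedged sketch of deferring reference-table replication while adding a node follows. It assumes Canopy follows the Citus behavior for this GUC and also ships a replicate_reference_tables() helper to copy the tables later; verify both against the Canopy reference. The new worker address is hypothetical.

-- don't copy reference tables while the node is being activated
ALTER SYSTEM SET canopy.replicate_reference_tables_on_activate TO off;
SELECT pg_reload_conf();

SELECT * FROM canopy_add_node('10.20.30.13', 5432);  -- hypothetical new worker

-- later, during a quiet period, copy the reference tables to the new node
SELECT replicate_reference_tables();                 -- assumed helper, as in Citus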
Canopy worker nodes use encrypted communication by default. A new node will refuse to talk to other workers that do not have SSL enabled. When adding a node to a cluster whose traffic is not encrypted, you must reconfigure the new node before creating the Canopy extension.
First, from the coordinator node check whether the other workers use SSL:
lightdb@test=# SELECT run_command_on_workers('show ssl');
  run_command_on_workers
--------------------------
 (10.20.30.11,5432,t,off)
 (10.20.30.12,5432,t,off)
(2 rows)
If they do not, then connect to the new node and permit it to communicate over plaintext if necessary:
ALTER SYSTEM SET canopy.node_conninfo TO 'sslmode=prefer';
SELECT pg_reload_conf();
Rebalancing Shards Without Downtime
Canopy supports online rebalancing: if you want to move existing shards to a newly added worker node, Canopy provides the rebalance_table_shards function. It moves the shards of the given table so that they are distributed evenly among the worker nodes.
The function is configurable to rebalance shards according to a number of strategies, to best match your database workload. See the function reference to learn which strategy to choose. Here is an example of rebalancing shards using the default strategy:
SELECT rebalance_table_shards();
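For instance, assuming Canopy inherits the Citus-style rebalance_strategy argument and its built-in by_disk_size strategy (check the function reference for the strategies your build actually provides), a non-default rebalance might look like this:

-- balance shards by their size on disk rather than by shard count
SELECT rebalance_table_shards(rebalance_strategy := 'by_disk_size');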
Many products, such as multi-tenant SaaS applications, cannot tolerate downtime, and rebalancing on LightDB meets this requirement: application reads and writes can continue while data is being moved, with minimal disruption to the business.
How It Works
Canopy's shard rebalancing uses LightDB logical replication to move data from the old shard (called the "publisher" in replication terms) to the new shard (the "subscriber"). Logical replication allows application reads and writes to continue uninterrupted while the shard data is copied. Canopy only takes a brief write lock on a shard while it updates metadata to promote the subscriber shard to active.
As the LightDB documentation explains, the source table must have a replica identity configured:
A published table must have a “replica identity” configured in order to be able to replicate UPDATE and DELETE operations, so that appropriate rows to update or delete can be identified on the subscriber side. By default, this is the primary key, if there is one. Another unique index (with certain additional requirements) can also be set to be the replica identity.
In other words, if your distributed table has a primary key defined, it is ready for shard rebalancing with no extra work. If it has neither a primary key nor an explicitly defined replica identity, attempting to rebalance it results in an error. For example:
-- creating the following table without REPLICA IDENTITY or PRIMARY KEY
CREATE TABLE test_table (key int not null, value text not null);
SELECT create_distributed_table('test_table', 'key');

-- add a new worker node to simulate need for
-- shard rebalancing

-- running shard rebalancer with default behavior
SELECT rebalance_table_shards('test_table');
/*
NOTICE:  Moving shard 102040 from localhost:9701 to localhost:9700 ...
ERROR:   cannot use logical replication to transfer shards of the relation
         test_table since it doesn't have a REPLICA IDENTITY or PRIMARY KEY
DETAIL:  UPDATE and DELETE commands on the shard will error out during logical
         replication unless there is a REPLICA IDENTIY or PRIMARY KEY.
HINT:    If you wish to continue without a replica identity set the
         shard_transfer_mode to 'force_logical' or 'block_writes'.
*/
Possible solutions are given below.
First, check whether the table has a unique constraint.
If the table to be replicated already has a unique index that includes the distribution column, choose that index as the replica identity:
-- supposing my_table has unique index my_table_idx
-- which includes distribution column
ALTER TABLE my_table REPLICA IDENTITY USING INDEX my_table_idx;
While REPLICA IDENTITY USING INDEX works well, we recommend against adding REPLICA IDENTITY FULL to a table. With that setting, every update/delete causes a full table scan on the subscriber side to find the tuples matching those rows. In our testing we found this performs worse than the force_logical approach described below.
Next, determine whether a primary key can be added to the table.
Add a primary key to the table. If the desired key happens to be the distribution column, this is easy: just add the constraint. Otherwise, a primary key involving non-distribution columns must be composite and also contain the distribution column.
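As a brief sketch with hypothetical names (an orders table distributed by tenant_id whose natural key is order_id), the composite primary key must include the distribution column:

-- the primary key has to contain the distribution column (tenant_id)
ALTER TABLE orders
  ADD CONSTRAINT orders_pkey PRIMARY KEY (tenant_id, order_id);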
When neither a unique constraint nor a primary key can be added
If the distributed table has no primary key or replica identity, and adding one is ambiguous or undesirable, you can still force the use of logical replication on LightDB. It is fine to do this on a table that only receives reads and inserts (no deletes or updates). Include the optional shard_transfer_mode argument of rebalance_table_shards:
SELECT rebalance_table_shards(
  'test_table',
  shard_transfer_mode => 'force_logical');
In this case, if the application does attempt an update or delete during replication, the request simply returns an error. Deletes and writes become possible again once replication is complete.
Adding a Coordinator (CN) Node
The Canopy coordinator node only stores metadata about the table shards and does not store any data. This means all computation is pushed down to the workers, and the coordinator only performs the final aggregation of the workers' results. Therefore the coordinator is unlikely to become a bottleneck for read performance. It is also easy to scale the coordinator up by moving it to a more powerful machine.
However, in some write-heavy use cases where the coordinator does become a performance bottleneck, users can add another coordinator. Because the metadata tables are small (typically a few MB), the metadata can be copied to another node and synced regularly. Once that is done, users can send their queries to either coordinator and scale out performance.
Dealing With Node Failures
In this subsection we discuss how to handle node failures without bringing down the Canopy cluster.
Worker Node Failures
Canopy uses LightDB streaming replication, which allows it to tolerate worker node failures. This option replicates an entire worker node by continuously streaming its WAL records to a standby. You can configure streaming replication yourself by consulting the LightDB replication documentation.
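A minimal sketch of the primary-side settings, expressed as SQL, is shown below. These are standard LightDB/PostgreSQL streaming-replication parameters; the standby-side steps are only summarized in comments, and the exact tooling and file names should be taken from the LightDB replication documentation.

-- on the worker that will act as primary (changing wal_level requires a restart)
ALTER SYSTEM SET wal_level = 'replica';
ALTER SYSTEM SET max_wal_senders = 10;
-- also allow the standby's replication user in lt_hba.conf

-- on the standby: take a base backup of the primary, point primary_conninfo at
-- the primary, and create the standby.signal file (a base-backup tool's -R
-- option can write these for you), then start the standby instance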
Coordinator Node Failures
The Canopy coordinator node maintains metadata tables that track all of the cluster nodes and the locations of the database shards on those nodes. The metadata tables are small (typically a few MB) and do not change very often. This means they can be replicated and restored quickly if the node ever experiences a failure. Users have several options for handling coordinator failures.
- Use LightDB streaming replication: You can use LightDB's streaming replication feature to create a hot standby of the coordinator. Then, if the primary coordinator node fails, the standby can automatically be promoted to primary to serve queries for your cluster.
- Use backup tools: Since the metadata tables are small, users can use EBS volumes or the LightDB backup tools to back up the metadata. They can then easily copy the metadata onto a new node to resume operation.
Tenant Isolation
Canopy now includes tenant isolation!
Canopy places table rows into worker shards based on the hashed value of the row's distribution column. Multiple distribution column values usually fall into the same shard. In the Canopy multi-tenant use case this means that tenants typically share shards.
However, sharing shards can cause resource contention when tenants differ greatly in size. This is a common situation for systems with a large number of tenants: we have observed that, as the number of tenants grows, tenant data size tends to follow a Zipfian distribution. That means there are a few very large tenants and many smaller ones. To improve resource allocation and guarantee tenant QoS, it is worth moving large tenants to dedicated nodes.
Canopy provides the tools to isolate a tenant on a specific node. This happens in two phases: 1) isolate the tenant's data into a new dedicated shard, then 2) move the shard to the desired node. To understand the process, it helps to know precisely how data rows are assigned to shards.
Every shard is marked in Canopy metadata with the range of hash values it contains (more information in the pg_dist_shard reference). The Canopy UDF isolate_tenant_to_new_shard(table_name, tenant_id) moves a tenant into a dedicated shard in three steps:
- Creates a new shard for table_name that (a) includes rows whose distribution column has the value tenant_id and (b) excludes all other rows.
- Moves the relevant rows from their current shard to the new shard.
- Splits the old shard into two shards whose hash ranges abut the carved-out range from below and above.
Additionally, the UDF takes a CASCADE option, which isolates the tenant's rows not only for table_name but for all tables co-located with it. Here is an example:
-- This query creates an isolated shard for the given tenant_id and
-- returns the new shard id.

-- General form:
SELECT isolate_tenant_to_new_shard('table_name', tenant_id);

-- Specific example:
SELECT isolate_tenant_to_new_shard('lineitem', 135);

-- If the given table has co-located tables, the query above errors out and
-- advises to use the CASCADE option
SELECT isolate_tenant_to_new_shard('lineitem', 135, 'CASCADE');

┌─────────────────────────────┐
│ isolate_tenant_to_new_shard │
├─────────────────────────────┤
│                      102240 │
└─────────────────────────────┘
The new shard(s) are created on the same node as the shard(s) from which the tenant was removed. For true hardware isolation they can be moved to a separate node in the Canopy cluster. As mentioned, the isolate_tenant_to_new_shard function returns the newly created shard id, and this id can be used to move the shard:
-- find the node currently holding the new shard
SELECT nodename, nodeport
  FROM canopy_shards
 WHERE shardid = 102240;

-- list the available worker nodes that could hold the shard
SELECT * FROM master_get_active_worker_nodes();

-- move the shard to your choice of worker
-- (it will also move any shards created with the CASCADE option)
SELECT canopy_move_shard_placement(
  102240,
  'source_host', source_port,
  'dest_host', dest_port);
Note that canopy_move_shard_placement will also move any shards which are co-located with the specified one, to preserve their co-location.
Viewing Query Statistics
Canopy now includes the canopy_stat_statements view!
When administering a Canopy cluster it is useful to know which queries users are running, which nodes are involved, and which execution method Canopy uses for each query. Canopy records query statistics in a metadata view named canopy_stat_statements, analogous to LightDB's lt_stat_statements. Whereas lt_stat_statements stores information about query duration and I/O, canopy_stat_statements stores information about Canopy execution methods and shard partition keys (when applicable).
Canopy requires the lt_stat_statements extension to be installed in order to track query statistics. This extension comes pre-enabled in LightDB.
Let's see how this works. Assume we have a table called foo that is hash-distributed by its id column.
-- create and populate distributed table
create table foo ( id int );
select create_distributed_table('foo', 'id');
insert into foo select generate_series(1,100);
We'll run two more queries, and canopy_stat_statements will show how Canopy chose to execute them.
-- counting all rows executes on all nodes, and sums
-- the results on the coordinator
SELECT count(*) FROM foo;

-- specifying a row by the distribution column routes
-- execution to an individual node
SELECT * FROM foo WHERE id = 42;
To see how these queries were executed, query the statistics view:
SELECT * FROM canopy_stat_statements;

-[ RECORD 1 ]-+----------------------------------------------
queryid       | -6844578505338488014
userid        | 10
dbid          | 13340
query         | SELECT count(*) FROM foo;
executor      | adaptive
partition_key |
calls         | 1
-[ RECORD 2 ]-+----------------------------------------------
queryid       | 185453597994293667
userid        | 10
dbid          | 13340
query         | insert into foo select generate_series($1,$2)
executor      | insert-select
partition_key |
calls         | 1
-[ RECORD 3 ]-+----------------------------------------------
queryid       | 1301170733886649828
userid        | 10
dbid          | 13340
query         | SELECT * FROM foo WHERE id = $1
executor      | adaptive
partition_key | 42
calls         | 1
We can see that Canopy most commonly uses the adaptive executor to run queries. This executor fragments the query into constituent queries to run on the relevant nodes and combines the results on the coordinator node. For the second query (filtering by the distribution column, id = $1), Canopy determined that it needed data from just one node. Finally, we can see that the insert into foo select… statement ran with the insert-select executor, which provides the flexibility to run these kinds of queries.
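To cross-check an individual statement you can run EXPLAIN on it; for the routed query the plan typically shows a single task sent to one worker. The exact custom-scan node name and shard names depend on your Canopy installation, so the plan output is not reproduced here.

-- the distributed plan for the single-tenant query should contain only one task
EXPLAIN SELECT * FROM foo WHERE id = 42;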
So far the information in this view gives us nothing we couldn't learn by running EXPLAIN on a given query. However, in addition to information about individual queries, the canopy_stat_statements view lets us answer questions such as "what percentage of queries in the cluster target a single tenant?"
SELECT sum(calls),
       partition_key IS NOT NULL AS single_tenant
  FROM canopy_stat_statements
 GROUP BY 2;

 sum | single_tenant
-----+---------------
   2 | f
   1 | t
In a multi-tenant database, for instance, we would expect the vast majority of queries to be single tenant. Seeing too many multi-tenant queries may indicate that queries do not have the proper filters to match a tenant, and are using unnecessary resources.
We can also find which partition_ids are the most frequent targets. In a multi-tenant application these would be the busiest tenants.
SELECT partition_key, sum(calls) as total_queries
  FROM canopy_stat_statements
 WHERE coalesce(partition_key, '') <> ''
 GROUP BY partition_key
 ORDER BY total_queries desc
 LIMIT 10;

┌───────────────┬───────────────┐
│ partition_key │ total_queries │
├───────────────┼───────────────┤
│ 42            │             1 │
└───────────────┴───────────────┘
Statistics Expiration
The lt_stat_statements view limits the number of statements it tracks and the duration for which it records them. Because canopy_stat_statements tracks a strict subset of the queries in lt_stat_statements, choosing equal limits for the two views would cause a mismatch in their data retention. Mismatched records can cause unpredictable behavior in joins between the views.
There are three ways to help synchronize the views, and all three can be used together.
- Have the maintenance daemon periodically sync the canopy and lt stats. The GUC canopy.stat_statements_purge_interval sets time in seconds for the sync. A value of 0 disables periodic syncs.
- Adjust the number of entries in canopy_stat_statements. The canopy.stat_statements_max GUC removes old entries when new ones cross the threshold. The default value is 50K, and the highest allowable value is 10M. Note that each entry costs about 140 bytes in shared memory so set the value wisely.
- Increase lt_stat_statements.max. Its default value is 5000, and could be increased to 10K, 20K or even 50K without much overhead. This is most beneficial when there is more local (i.e. coordinator) query workload.
Note
Changing lt_stat_statements.max or canopy.stat_statements_max requires restarting the LightDB service. Changing canopy.stat_statements_purge_interval, on the other hand, will come into effect with a call to pg_reload_conf().
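A minimal tuning sketch combining the three knobs above (the values are illustrative only):

-- sync canopy_stat_statements with lt_stat_statements every 10 seconds
ALTER SYSTEM SET canopy.stat_statements_purge_interval TO 10;

-- raise both retention limits together so the views stay joinable
-- (both of these *_max settings require a LightDB service restart)
ALTER SYSTEM SET canopy.stat_statements_max TO 100000;
ALTER SYSTEM SET lt_stat_statements.max TO 20000;

-- picks up the purge interval immediately
SELECT pg_reload_conf();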
Resource Conservation
Limiting Long-Running Queries
Long-running queries can hold locks, queue up WAL, or simply consume a lot of system resources, so in a production environment it is best to prevent them from running too long. You can set the statement_timeout parameter on the coordinator and the workers to cancel queries that run too long.
-- limit queries to five minutes
ALTER DATABASE canopy SET statement_timeout TO 300000;
SELECT run_command_on_workers($cmd$
  ALTER DATABASE canopy SET statement_timeout TO 300000;
$cmd$);

The timeout is specified in milliseconds. To customize the timeout per query, use SET LOCAL in a transaction:

BEGIN;
-- this limit applies to just the current transaction
SET LOCAL statement_timeout TO 300000;
-- ...
COMMIT;
Security
Connection Management
Note
The traffic between the different nodes in the cluster is encrypted for NEW installations, using TLS with self-signed certificates. Because the certificates are self-signed, this does not protect against man-in-the-middle attacks; it only protects against passive eavesdropping on the network.
When Canopy nodes communicate with one another they consult a table with connection credentials. This gives the database administrator flexibility to adjust parameters for security and efficiency.
To set non-sensitive libpq connection parameters to be used for all node connections, update the canopy.node_conninfo GUC:
-- key=value pairs separated by spaces.
-- For example, ssl options:
ALTER SYSTEM SET canopy.node_conninfo =
  'sslrootcert=/path/to/canopy-ca.crt sslcrl=/path/to/canopy-ca.crl sslmode=verify-full';
There is a whitelist of parameters that the GUC accepts, see the node_conninfo reference for details. The default value for node_conninfo is sslmode=require, which prevents unencrypted communication between nodes.
After changing this setting it is important to reload the lightdb configuration. Even though the changed setting might be visible in all sessions, the setting is only consulted by Canopy when new connections are established. When a reload signal is received, Canopy marks all existing connections to be closed which causes a reconnect after running transactions have been completed.
SELECT pg_reload_conf();
-- only superusers can access this table
-- add a password for user jdoe
INSERT INTO pg_dist_authinfo
  (nodeid, rolename, authinfo)
VALUES
  (123, 'jdoe', 'password=abc123');
After this INSERT, any query needing to connect to node 123 as the user jdoe will use the supplied password. The documentation for pg_dist_authinfo has more info.
-- update user jdoe to use certificate authentication
UPDATE pg_dist_authinfo
SET authinfo = 'sslcert=/path/to/user.crt sslkey=/path/to/user.key'
WHERE nodeid = 123 AND rolename = 'jdoe';
This changes the user to authenticate with a certificate and key file instead of a password when connecting to node 123. Make sure the user certificate is signed by a certificate that is trusted by the worker you are connecting to, and that the authentication settings on the worker allow certificate-based authentication. Full documentation on how to use client certificates can be found in the lightdb libpq documentation.
Changing pg_dist_authinfo does not force any existing connection to reconnect.
Set up Certificate Authority signed certificates
This section assumes you have a trusted Certificate Authority that can issue server certificates to you for all nodes in your cluster. It is recommended to work with the security department in your organization to prevent key material from being handled incorrectly. This guide covers only Canopy specific configuration that needs to be applied, not best practices for PKI management.
For all nodes in the cluster you need to get a valid certificate signed by the same Certificate Authority. The following machine specific files are assumed to be available on every machine:
• /path/to/server.key: Server Private Key
• /path/to/server.crt: Server Certificate or Certificate Chain for Server Key, signed by trusted Certificate Authority.
In addition to these machine-specific files, the following cluster- or CA-wide files need to be available:
• /path/to/ca.crt: Certificate of the Certificate Authority
• /path/to/ca.crl: Certificate Revocation List of the Certificate Authority
Note
The Certificate Revocation List is likely to change over time. Work with your security department to set up a mechanism to push updates of the revocation list to all nodes in the cluster in a timely manner. A reload of every node in the cluster is required after the revocation list has been updated.
Once all files are in place on the nodes, the following settings need to be configured in the LightDB configuration file:
# the following settings allow the lightdb server to enable ssl, and
# configure the server to present the certificate to clients when
# connecting over tls/ssl
ssl = on
ssl_key_file = '/path/to/server.key'
ssl_cert_file = '/path/to/server.crt'

# this will tell canopy to verify the certificate of the server it is connecting to
canopy.node_conninfo = 'sslmode=verify-full sslrootcert=/path/to/ca.crt sslcrl=/path/to/ca.crl'
After changing, either restart the database or reload the configuration to apply these changes. Also, adjusting canopy.local_hostname (text) may be required for proper functioning with sslmode=verify-full.
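As a sketch (with a hypothetical hostname), canopy.local_hostname should be set to a name that matches this node's server certificate, so that verify-full checks succeed when Canopy connects to itself:

-- must match the CN or a subjectAltName in /path/to/server.crt
ALTER SYSTEM SET canopy.local_hostname TO 'worker-1.db.internal';
SELECT pg_reload_conf();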
Depending on the policy of the Certificate Authority used you might need or want to change sslmode=verify-full in canopy.node_conninfo to sslmode=verify-ca. For the difference between the two settings please consult the official lightdb documentation.
Lastly, to prevent any user from connecting via an un-encrypted connection, changes need to be made to lt_hba.conf. Many LightDB installations will have entries allowing host connections which allow SSL/TLS connections as well as plain TCP connections. By replacing all host entries with hostssl entries, only encrypted connections will be allowed to authenticate to LightDB. For full documentation on these settings take a look at the lt_hba.conf file documentation on the official LightDB documentation.
Note
When a trusted Certificate Authority is not available, one can create their own via a self-signed root certificate. This is non-trivial and the developer or operator should seek guidance from their security team when doing so.
To verify the connections from the coordinator to the workers are encrypted you can run the following query. It will show the SSL/TLS version used to encrypt the connection that the coordinator uses to talk to the worker:
SELECT run_command_on_workers($$
  SELECT version FROM pg_stat_ssl WHERE pid = pg_backend_pid()
$$);

┌────────────────────────────┐
│   run_command_on_workers   │
├────────────────────────────┤
│ (localhost,9701,t,TLSv1.2) │
│ (localhost,9702,t,TLSv1.2) │
└────────────────────────────┘
(2 rows)
Increasing Worker Security
For your convenience getting started, our multi-node installation instructions direct you to set up the lt_hba.conf on the workers with its authentication method set to “trust” for local network connections. However, you might desire more security.
To require that all connections supply a hashed password, update the LightDB lt_hba.conf on every worker node with something like this:
# Require password access and a ssl/tls connection to nodes in the local
# network. The following ranges correspond to 24, 20, and 16-bit blocks
# in Private IPv4 address spaces.
hostssl    all    all    10.0.0.0/8      md5

# Require passwords and ssl/tls connections when the host connects to
# itself as well.
hostssl    all    all    127.0.0.1/32    md5
hostssl    all    all    ::1/128         md5
The coordinator node needs to know roles’ passwords in order to communicate with the workers. In Canopy the authentication information has to be maintained in a .pgpass file. Edit .pgpass in the lightdb user’s home directory, with a line for each combination of worker address and role:
hostname:port:database:username:password
Sometimes workers need to connect to one another, such as during repartition joins. Thus each worker node requires a copy of the .pgpass file as well.
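For example, using the worker addresses and database from earlier in this section and a hypothetical canopy_user role and password, the .pgpass entries (the file must have 0600 permissions) might look like:

10.20.30.11:5432:test:canopy_user:s3cretpassword
10.20.30.12:5432:test:canopy_user:s3cretpassword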
Row-Level Security
Canopy now supports row-level security for distributed tables.
LightDB row-level security policies restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data-modification commands. This is especially useful in a multi-tenant Canopy cluster because it allows individual tenants to have full SQL access to the database while hiding each tenant's information from other tenants.
We can implement the separation of tenant data by using a naming convention for database roles that ties into table row-level security policies. We assign each tenant a database role in numbered order: tenant_1, tenant_2, etc. Tenants connect to Canopy using these separate roles. Row-level security policies can compare the role name with values in the tenant_id distribution column to decide whether to allow access.
Here is how to apply the approach on a simplified events table distributed by tenant_id. First create the roles tenant_1 and tenant_2. Then run the following as an administrator:
CREATE TABLE events(
  tenant_id int,
  id int,
  type text
);

SELECT create_distributed_table('events','tenant_id');

INSERT INTO events VALUES (1,1,'foo'), (2,2,'bar');

-- assumes that roles tenant_1 and tenant_2 exist
GRANT select, update, insert, delete
  ON events TO tenant_1, tenant_2;
As it stands, anyone with select permission on this table can see both rows. Users of either tenant can view and update the other tenant's row. We can solve this with row-level table security policies.
Each policy consists of two clauses: USING and WITH CHECK. When a user tries to read or write rows, the database evaluates each row against these clauses. Existing table rows are checked against the expression specified in USING, while new rows created via INSERT or UPDATE are checked against the expression specified in WITH CHECK.
-- first a policy for the system admin "canopy" user
CREATE POLICY admin_all ON events
  TO canopy           -- apply to this role
  USING (true)        -- read any existing row
  WITH CHECK (true);  -- insert or update any row

-- next a policy which allows role "tenant_<n>" to
-- access rows where tenant_id = <n>
CREATE POLICY user_mod ON events
  USING (current_user = 'tenant_' || tenant_id::text);
  -- lack of CHECK means same condition as USING

-- enforce the policies
ALTER TABLE events ENABLE ROW LEVEL SECURITY;
Now roles tenant_1 and tenant_2 get different results for their queries:
Connected as tenant_1:
SELECT * FROM events;
┌───────────┬────┬──────┐
│ tenant_id │ id │ type │
├───────────┼────┼──────┤
│         1 │  1 │ foo  │
└───────────┴────┴──────┘

Connected as tenant_2:

SELECT * FROM events;
┌───────────┬────┬──────┐
│ tenant_id │ id │ type │
├───────────┼────┼──────┤
│         2 │  2 │ bar  │
└───────────┴────┴──────┘

INSERT INTO events VALUES (3,3,'surprise');
/*
ERROR: new row violates row-level security policy for table "events_102055"
*/