Accessing host services from inside a Docker container

Posted by failymao on 2021-10-20

1. Scenario

My daily development and testing is done on Windows with WSL2, but WSL2 frequently runs into networking problems. For example, I was recently testing a project whose core job is to sync PostgreSQL data to ClickHouse using the open-source component synch.


Components required for the test:

  1. postgres
  2. kafka
  3. zookeeper
  4. redis
  5. synch container

For the initial test, the plan was to orchestrate the five services above with docker-compose, running them all with network_mode: host. Given Kafka's listener security mechanism, this network mode is convenient because no ports have to be exposed individually.

The docker-compose.yaml file is as follows:

version: "3"

services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true  # set in the docker-compose env file
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"

  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"

  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data  # data mount
    network_mode: "host"

  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  # one consumer consumes one database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"

volumes:
  redis:
  kafka:
  zookeeper:
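
With this file in place, the whole stack can be brought up and checked in one go (a usage sketch; the service names are the ones defined above):

docker-compose up -d
docker-compose ps        # all five services should show as up
docker-compose logs -f producer consumer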


During testing, PostgreSQL needed the wal2json plugin, and installing it separately inside the container proved troublesome: several attempts all ended in failure. So instead I installed the PostgreSQL service on the host and pointed the containerized synch service at the host's IP and port.
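
For reference, the host-side setup looks roughly like the sketch below. This is a minimal sketch assuming Ubuntu with the PGDG apt repository; the package name postgresql-12-wal2json matches PostgreSQL 12, and the non-default port 5433 comes from this setup:

# install the wal2json logical decoding plugin on the host
sudo apt-get install -y postgresql-12-wal2json

# postgresql.conf: settings the synch postgres source relies on
#   wal_level = logical
#   listen_addresses = '*'
#   port = 5433

sudo systemctl restart postgresql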


But after restarting the services, synch would not come up; the logs showed that PostgreSQL could not be reached. The synch configuration file at that point:

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit,recommend set 20000 when production
  insert_interval: 60 # how many seconds to submit,recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # current support redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

This was puzzling. PostgreSQL was confirmed to be running and listening on its port (5433 here), yet connections from the container failed with both localhost and the host's eth0 address.
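
A quick way to narrow this down is to check both sides separately (a hedged sketch; "producer" is the synch container from the compose file, and a bare Python socket check is used because psql may not exist in the image):

# on the host: confirm postgres really is listening on 5433
ss -lntp | grep 5433

# from inside the container: try a plain TCP connection to the same address
docker exec -it producer python -c \
  "import socket; socket.create_connection(('127.0.0.1', 5433), timeout=2)"

The host-side check succeeds while the in-container check fails, which suggests the problem is in how the container reaches the host network, not in PostgreSQL itself.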


2. Solution

Googling led to a highly upvoted Stack Overflow answer that solved the problem. The original answer:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the original post (linked in the references below) for more details.
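
For the record, the --add-host option from the answer has a docker-compose equivalent, extra_hosts (a minimal sketch for the default bridge-network case; it is not needed in host mode):

services:
  producer:
    image: long2ice/synch
    extra_hosts:
      - "host.docker.internal:host-gateway"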

Accessing host services from a container in host network mode

Changing the PostgreSQL host in the synch configuration to host.docker.internal fixed the error. The host's /etc/hosts file:


root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost

10.111.130.24    host.docker.internal

As shown, WSL generates a mapping from host.docker.internal to the host's IP. Connecting to that hostname resolves to the host IP and thus reaches services running on the host.
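
This is easy to verify from inside a container (a hedged sketch reusing the producer container; Python is assumed to be available in the long2ice/synch image):

# resolve the name, then open a TCP connection to the host's postgres
docker exec -it producer python -c "import socket; \
  print(socket.gethostbyname('host.docker.internal')); \
  socket.create_connection(('host.docker.internal', 5433), timeout=2); \
  print('connected')"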

The final synch service configuration:

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit,recommend set 20000 when production
  insert_interval: 60 # how many seconds to submit,recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # current support redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
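After updating the configuration, restart the two synch services and watch the logs (a usage sketch; service names from the compose file above):

docker-compose up -d producer consumer
docker-compose logs -f producer    # the postgres connection error should be gone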


3. Summary

  1. When a container is started with --network="host" and you want to reach a service running on the host from inside the container, change the IP to `host.docker.internal`.

4. References

  1. https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
