Notes on Using the Langchain-Chatchat Open-Source Library (Part 1)

Posted by 有何m不可 on 2024-03-14

Reposted from: https://zhuanlan.zhihu.com/p/676061269

1 Chatchat Project Structure

The overall structure: the server starts the API, and the rest of the project then calls that API itself.

API details can be viewed at http://xxx:7861/docs; the overall code architecture is well worth studying in depth.
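
Since the service is FastAPI-based, the machine-readable schema behind that docs page can also be fetched directly. A minimal sketch (assuming the default FastAPI setup and a local server on port 7861; adjust the host to your deployment) to list the available endpoints:

import requests

# FastAPI exposes its OpenAPI schema at /openapi.json by default;
# /docs is just the Swagger UI rendered on top of it.
schema = requests.get("http://127.0.0.1:7861/openapi.json").json()
for path in sorted(schema["paths"]):
    print(path)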

2 Studying Some Chatchat Code

2.1 Calling the 12 Chunking Functions Through One Interface

As of 2023-12-31, I count a total of 12 chunking functions in chatchat. For how to use them, plus brief comments on each, see my other article (RAG Chunking: Strengths and Weaknesses, Tips, and Methods, Part 5):

CharacterTextSplitter
LatexTextSplitter
MarkdownHeaderTextSplitter
MarkdownTextSplitter
NLTKTextSplitter
PythonCodeTextSplitter
RecursiveCharacterTextSplitter
SentenceTransformersTokenTextSplitter
SpacyTextSplitter

(the nine above come from LangChain; the three below are chatchat's own)

AliTextSplitter
ChineseRecursiveTextSplitter
ChineseTextSplitter

Borrowing test/custom_splitter/test_different_splitter.py from the chatchat project, let's see how make_text_splitter calls them all:

import os

from langchain import document_loaders
from server.knowledge_base.utils import make_text_splitter

# Read the file with a document loader
filepath = "knowledge_base/samples/content/test_files/test.txt"
loader = document_loaders.UnstructuredFileLoader(filepath, autodetect_encoding=True)
docs = loader.load()

CHUNK_SIZE = 250
OVERLAP_SIZE = 50

splitter_name = 'AliTextSplitter'
text_splitter = make_text_splitter(splitter_name, CHUNK_SIZE, OVERLAP_SIZE)
if splitter_name == "MarkdownHeaderTextSplitter":
    docs = text_splitter.split_text(docs[0].page_content)
    for doc in docs:
        if doc.metadata:
            doc.metadata["source"] = os.path.basename(filepath)
else:
    docs = text_splitter.split_documents(docs)
for doc in docs:
    print(doc)
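
To get a quick feel for how the splitters differ, the same call can be run over several splitter names. A small sketch of mine (not from the chatchat tests; it reuses filepath, CHUNK_SIZE, and OVERLAP_SIZE from above, and assumes each listed splitter's dependencies are installed and that each supports split_documents):

for name in [
    "ChineseRecursiveTextSplitter",
    "RecursiveCharacterTextSplitter",
    "SpacyTextSplitter",
    "AliTextSplitter",
]:
    splitter = make_text_splitter(name, CHUNK_SIZE, OVERLAP_SIZE)
    # reload so every splitter starts from the same raw document
    raw_docs = document_loaders.UnstructuredFileLoader(
        filepath, autodetect_encoding=True).load()
    chunks = splitter.split_documents(raw_docs)
    print(f"{name}: {len(chunks)} chunks")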

2.2 Using the Knowledge-Base Q&A Chat

This section draws on tests\api\test_stream_chat_api_thread.py and tests\api\test_stream_chat_api.py from the chatchat open-source project to explore calling knowledge-base Q&A, covering:

  • streaming calls
  • single (one-shot) calls
  • multi-threaded concurrent calls

2.2.1 Streaming Calls

import requests
import json
from pprint import pprint

api_base_url = 'http://0.0.0.0:7861'

api="/chat/knowledge_base_chat"
url = f"{api_base_url}{api}"


headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json',
}


data = {
    "query": "如何提問以獲得高質量答案",
    "knowledge_base_name": "ZWY_V2_m3e-large",
    "history": [
        {
            "role": "user",
            "content": "你好"
        },
        {
            "role": "assistant",
            "content": "你好,我是 ChatGLM"
        }
    ],
    "stream": True
}
# dump_input(data, api)
response = requests.post(url, headers=headers, json=data, stream=True)
print("\n")
print("=" * 30 + api + "  output" + "="*30)
for line in response.iter_content(None, decode_unicode=True):
    data = json.loads(line)
    if "answer" in data:
        print(data["answer"], end="", flush=True)
pprint(data)
assert "docs" in data and len(data["docs"]) > 0
assert response.status_code == 200

>>>==============================/chat/knowledge_base_chat  output==============================
 Hello! Here are some suggestions for asking questions so as to get high-quality answers:

1. Express your question as clearly as possible: make sure it is clear, concise, and unambiguous, so that I can understand it accurately and give an appropriate answer.
2. Provide enough context: supply relevant background information so that I can understand your question better and answer more accurately.
3. Use plain language: keep the wording simple and direct so that I can understand your question quickly.
4. Avoid abbreviations and slang: so that I can interpret your question accurately.
5. Ask step by step: if the question is complex, break it into steps so I can help you solve it piece by piece.
6. Check your question: before asking, check that it is complete, clear, and accurate.
7. Give feedback: if you are unsatisfied with my answer, tell me so I can improve it.

I hope these suggestions help you ask better questions and get high-quality answers.

The structure is fairly simple: POST to the knowledge-base chat URL and consume the streamed response via response.iter_content.
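
For reuse, the streaming loop can be folded into a small generator. This is my own sketch rather than chatchat code (stream_kb_chat is a made-up name; it reuses url and headers from above and assumes, as the loop above does, that each streamed line is one standalone JSON object):

def stream_kb_chat(query, knowledge_base_name, history=None):
    payload = {
        "query": query,
        "knowledge_base_name": knowledge_base_name,
        "history": history or [],
        "stream": True,
    }
    response = requests.post(url, headers=headers, json=payload, stream=True)
    for line in response.iter_content(None, decode_unicode=True):
        chunk = json.loads(line)
        # answer fragments carry "answer"; the final chunk carries "docs"
        if "answer" in chunk:
            yield chunk["answer"]

# usage: print fragments as they arrive
for fragment in stream_kb_chat("你好", "samples"):
    print(fragment, end="", flush=True)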

2.2.2 Normal Calls and Handling Concurrency

import requests
import json
from pprint import pprint

api_base_url = 'http://0.0.0.0:7861'

api="/chat/knowledge_base_chat"
url = f"{api_base_url}{api}"


headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json',
}


data = {
    "query": "如何提問以獲得高質量答案",
    "knowledge_base_name": "ZWY_V2_m3e-large",
    "history": [
        {
            "role": "user",
            "content": "你好"
        },
        {
            "role": "assistant",
            "content": "你好,我是 ChatGLM"
        }
    ],
    "stream": True
}

# A normal call, collecting the results
result = []
response = requests.post(url, headers=headers, json=data, stream=True)

for line in response.iter_content(None, decode_unicode=True):
    data = json.loads(line)
    result.append(data)

answer = ''.join([r['answer'] for r in result[:-1]]) # the assembled answer
>>> ' Hello, glad to help. Here are some questioning tips that can help you get high-quality answers:\n\n1. Express your question as clearly as possible: make sure it is accurate, concise, and unambiguous, so that I can understand it better and give you the best answer.\n2. Provide enough context: supply relevant background information so that I can understand your question better and answer it more accurately.\n3. Use plain language: keep the wording simple and clear so that I can understand you better.\n4. Avoid abbreviations and slang: stick to standard language so that I interpret your question correctly.\n5. Ask step by step: if you have a complex question, split it into several simple sub-questions so that I can answer each one better.\n6. Check your spelling and grammar: spelling and grammar mistakes can make your question hard to understand, so review your question before asking.\n7. Specify the answer type: if you need a particular kind of answer, such as a number, a list, or steps, tell me.\n\nI hope these tips help you get high-quality answers. If you have any other questions, feel free to ask me.'

refer_doc = result[-1] # the reference documents
>>> {'docs': ["<span style='color:red'>No relevant documents were found; this answer comes from the large model's own capabilities!</span>"]}
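
Since the answer fragments and the reference documents arrive as differently shaped chunks, the unpacking can be folded into one helper. A sketch of mine (parse_kb_result is a made-up name; unlike the result[:-1] slicing above, it does not assume the docs chunk comes last):

def parse_kb_result(result):
    # join every fragment that carries an "answer" key
    answer = "".join(chunk.get("answer", "") for chunk in result)
    # take the first chunk that carries "docs", if any
    docs = next((chunk["docs"] for chunk in result if "docs" in chunk), [])
    return answer, docs

answer, refer_doc = parse_kb_result(result)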

Now let's look at concurrency:

# Concurrent calls
def knowledge_chat(api="/chat/knowledge_base_chat"):
    url = f"{api_base_url}{api}"
    data = {
        "query": "如何提問以獲得高質量答案",
        "knowledge_base_name": "samples",
        "history": [
            {
                "role": "user",
                "content": "你好"
            },
            {
                "role": "assistant",
                "content": "你好,我是 ChatGLM"
            }
        ],
        "stream": True
    }
    result = []
    response = requests.post(url, headers=headers, json=data, stream=True)

    for line in response.iter_content(None, decode_unicode=True):
        data = json.loads(line)
        result.append(data)

    return result

from concurrent.futures import ThreadPoolExecutor, as_completed
import time

threads = []
times = []
pool = ThreadPoolExecutor()
start = time.time()
for i in range(10):
    t = pool.submit(knowledge_chat)
    threads.append(t)

for r in as_completed(threads):
    end = time.time()
    times.append(end - start)
    print("\nResult:\n")
    pprint(r.result())

print("\nTime used:\n")
for x in times:
    print(f"{x}")

The concurrent calls are driven by ThreadPoolExecutor and as_completed from concurrent.futures.


3 Practical Issues Around Knowledge Bases

3.1 Very Poor Support for .md Files

In configs/kb_config.py we can see:

# TextSplitter configuration. If you don't understand what these options mean, don't change them.
text_splitter_dict = {
    "ChineseRecursiveTextSplitter": {
        "source": "huggingface",   # 選擇tiktoken則使用openai的方法
        "tokenizer_name_or_path": "",
    },
    "SpacyTextSplitter": {
        "source": "huggingface",
        "tokenizer_name_or_path": "gpt2",
    },
    "RecursiveCharacterTextSplitter": {
        "source": "tiktoken",
        "tokenizer_name_or_path": "cl100k_base",
    },
    "MarkdownHeaderTextSplitter": {
        "headers_to_split_on":
            [
                ("#", "head1"),
                ("##", "head2"),
                ("###", "head3"),
                ("####", "head4"),
            ]
    },
}

# TEXT_SPLITTER name
TEXT_SPLITTER_NAME = "ChineseRecursiveTextSplitter"

When creating a new knowledge base, chatchat appears to support only one TEXT_SPLITTER_NAME per knowledge base; it cannot apply different chunking models to different files. So if you want different files within one knowledge base to be split in different ways, you have to rework the code structure yourself and then restart the project (a hypothetical sketch of such a per-file mapping follows below).
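
As a thought experiment only (none of this exists in chatchat; the mapping and helper below are hypothetical), a per-file splitter choice might look like:

import os

# Hypothetical mapping from file extension to splitter name
SPLITTER_BY_EXT = {
    ".md": "MarkdownHeaderTextSplitter",
    ".py": "PythonCodeTextSplitter",
}
DEFAULT_SPLITTER = "ChineseRecursiveTextSplitter"

def splitter_name_for(filepath: str) -> str:
    """Pick a splitter name by file extension, falling back to the default."""
    ext = os.path.splitext(filepath)[1].lower()
    return SPLITTER_BY_EXT.get(ext, DEFAULT_SPLITTER)

# e.g. make_text_splitter(splitter_name_for("notes.md"), CHUNK_SIZE, OVERLAP_SIZE)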

At the same time, the chatchat project's support for raw markdown files is very poor. Let's take a look:

import os

from langchain import document_loaders
from server.knowledge_base.utils import make_text_splitter

CHUNK_SIZE = 250
OVERLAP_SIZE = 50

# Load
filepath = "matt/智慧XXX.md"
loader = document_loaders.UnstructuredFileLoader(filepath, autodetect_encoding=True)
docs = loader.load()

# Split
splitter_name = 'ChineseRecursiveTextSplitter'
text_splitter = make_text_splitter(splitter_name, CHUNK_SIZE, OVERLAP_SIZE)
if splitter_name == "MarkdownHeaderTextSplitter":
    docs = text_splitter.split_text(docs[0].page_content)
    for doc in docs:
        if doc.metadata:
            doc.metadata["source"] = os.path.basename(filepath)
else:
    docs = text_splitter.split_documents(docs)
for doc in docs:
    print(doc)

First, chatchat reads .md files with UnstructuredFileLoader,

but without mode="elements" (see: LangChain: The All-Purpose Unstructured Document Loader Explained, Part 1).

So you can assume that once the file is loaded, the # markers are lost, and even if you select MarkdownHeaderTextSplitter it still cannot do its job. For now I don't recommend uploading .md documents; the better options (a sketch of the mode="elements" alternative follows after this list) are:

  • Convert the file to doc, which can keep # / ## / ###
  • Change TEXT_SPLITTER_NAME = "MarkdownHeaderTextSplitter" in configs/kb_config.py
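
For reference, here is a minimal sketch of loading with mode="elements" (outside chatchat; whether it fits depends on how the rest of the pipeline consumes per-element documents):

from langchain.document_loaders import UnstructuredFileLoader

# With mode="elements", unstructured keeps one document per element
# (titles, narrative text, ...) instead of flattening everything into a
# single string, so heading information is not silently dropped on load.
loader = UnstructuredFileLoader("matt/智慧XXX.md", mode="elements", autodetect_encoding=True)
elements = loader.load()
for el in elements[:5]:
    print(el.metadata.get("category"), "|", el.page_content[:40])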

3.2 Feasibility of Reading PDF Files + Splitting with MarkdownHeaderTextSplitter

In the chatchat project, PDF files are read with RapidOCRPDFLoader.

You may need to install:

!pip install pyMuPDF  -i https://pypi.tuna.tsinghua.edu.cn/simple
!pip install rapidocr_onnxruntime  -i https://pypi.tuna.tsinghua.edu.cn/simple
!pip install unstructured==0.11.0  -i https://pypi.tuna.tsinghua.edu.cn/simple
!pip install opencv-python-headless  -i https://pypi.tuna.tsinghua.edu.cn/simple

Without opencv-python-headless, you may hit: ImportError: libGL.so.1: cannot open shared object file: No such file or directory

The loader is defined in document_loaders.mypdfloader:

from typing import List
from langchain.document_loaders.unstructured import UnstructuredFileLoader
import tqdm


class RapidOCRPDFLoader(UnstructuredFileLoader):
    def _get_elements(self) -> List:
        def pdf2text(filepath):
            import fitz  # the fitz module bundled with PyMuPDF; don't confuse it with "pip install fitz"
            from rapidocr_onnxruntime import RapidOCR
            import numpy as np
            ocr = RapidOCR()
            doc = fitz.open(filepath)
            resp = ""

            b_unit = tqdm.tqdm(total=doc.page_count, desc="RapidOCRPDFLoader context page index: 0")
            for i, page in enumerate(doc):

                # update the progress-bar description
                b_unit.set_description("RapidOCRPDFLoader context page index: {}".format(i))
                # refresh so the updated description shows immediately
                b_unit.refresh()
                # TODO: adjust processing according to the order of text and images
                text = page.get_text("")
                resp += text + "\n"

                img_list = page.get_images()
                for img in img_list:
                    pix = fitz.Pixmap(doc, img[0])
                    img_array = np.frombuffer(pix.samples, dtype=np.uint8).reshape(pix.height, pix.width, -1)
                    result, _ = ocr(img_array)
                    if result:
                        ocr_result = [line[1] for line in result]
                        resp += "\n".join(ocr_result)

                # advance the progress bar
                b_unit.update(1)
            return resp

        text = pdf2text(self.file_path)
        from unstructured.partition.text import partition_text
        return partition_text(text=text, **self.unstructured_kwargs)


if __name__ == "__main__":
    loader = RapidOCRPDFLoader(file_path="tests/samples/ocr_test.pdf")
    docs = loader.load()
    print(docs)

What I test in this section is whether a PDF document containing # markers can be split with MarkdownHeaderTextSplitter.

The test code:

import os

from langchain import document_loaders
from server.knowledge_base.utils import make_text_splitter
# RapidOCRPDFLoader as defined above, in document_loaders.mypdfloader
from document_loaders.mypdfloader import RapidOCRPDFLoader

CHUNK_SIZE = 250
OVERLAP_SIZE = 50
filepath = "xxx.pdf"

# 文件讀入
loader = RapidOCRPDFLoader(file_path=filepath)
docs = loader.load()



from langchain.text_splitter import MarkdownHeaderTextSplitter

text_splitter_dict = {
    "ChineseRecursiveTextSplitter": {
        "source": "huggingface",   # choosing tiktoken uses OpenAI's method
        "tokenizer_name_or_path": "",
    },
    "SpacyTextSplitter": {
        "source": "huggingface",
        "tokenizer_name_or_path": "gpt2",
    },
    "RecursiveCharacterTextSplitter": {
        "source": "tiktoken",
        "tokenizer_name_or_path": "cl100k_base",
    },
    "MarkdownHeaderTextSplitter": {
        "headers_to_split_on":
            [
                ("#", "標題1"),
                ("##", "標題2"),
                ("###", "標題3"),
                ("####", "標題4"),
            ]
    },
}
splitter_name = 'MarkdownHeaderTextSplitter'

headers_to_split_on = text_splitter_dict[splitter_name]['headers_to_split_on']
text_splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on)

docs_2 = text_splitter.split_text(docs[0].page_content)
docs_2

The conclusions first:

  • After loading, the text can indeed be split on #, but some chunks end up with very large character counts.

So in general, if you use MarkdownHeaderTextSplitter you will likely want to chain a second splitter after it; for now, chatchat does not support using multiple splitters together. Outside chatchat you can chain them by hand, as in the LangChain-docs-style example below:

markdown_document = "# Intro \n\n    ## History \n\n Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9] \n\n Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files. \n\n ## Rise and divergence \n\n As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \n\n additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \n\n #### Standardization \n\n From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort. \n\n ## Implementations \n\n Implementations of Markdown are available for over a dozen programming languages."

from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
]

# MD splits
markdown_splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on, strip_headers=False
)
md_header_splits = markdown_splitter.split_text(markdown_document)

# Char-level splits
from langchain.text_splitter import RecursiveCharacterTextSplitter

chunk_size = 250
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)

# Split
splits = text_splitter.split_documents(md_header_splits)
splits

4 webui.py Cross-Origin Problems: An Attempted Fix

The overall chatchat architecture: the langchain-based framework can be driven either through the FastAPI-based API, or through the Streamlit-based WebUI.

So FastAPI provides all the server-side services, while the WebUI runs on its own and calls into FastAPI.

If a cross-origin problem occurs, you may see:

  • the web page keeps showing "please wait"
  • no WebSocket connection can be established

To be honest, I don't understand cross-origin issues particularly well, but a senior colleague once described them vividly:
calls between different services are like two restaurants (say, a northeastern-Chinese place and a McDonald's): if you order guobaorou at McDonald's, the staff ask the northeastern place next door, and that place doesn't want to sell you guobaorou through them.
So, to satisfy the customer, the northeastern place has to explicitly allow that kind of cross-shop fulfilment.

FastAPI sets up CORS in Langchain-Chatchat/startup.py via app.add_middleware:

def create_openai_api_app(
        controller_address: str,
        api_keys: List = [],
        log_level: str = "INFO",
) -> FastAPI:
    import fastchat.constants
    fastchat.constants.LOGDIR = LOG_PATH
    from fastchat.serve.openai_api_server import app, CORSMiddleware, app_settings
    from fastchat.utils import build_logger
    logger = build_logger("openai_api", "openai_api.log")
    logger.setLevel(log_level)

    app.add_middleware(
        CORSMiddleware,
        allow_credentials=True,
        allow_origins=["*"],
        allow_methods=["*"],
        allow_headers=["*"],
    )

    sys.modules["fastchat.serve.openai_api_server"].logger = logger
    app_settings.controller_address = controller_address
    app_settings.api_keys = api_keys

    MakeFastAPIOffline(app)
    app.title = "FastChat OpeanAI API Server"
    return app

The webui, meanwhile, is launched from Langchain-Chatchat/startup.py as a plain subprocess command, so it is structurally independent of FastAPI.

def run_webui(started_event: mp.Event = None, run_mode: str = None):
    from server.utils import set_httpx_config
    set_httpx_config()

    host = WEBUI_SERVER["host"]
    port = WEBUI_SERVER["port"]

    cmd = ["streamlit", "run", "webui.py",
            "--server.address", host,
            "--server.port", str(port),
            "--theme.base", "light",
            "--theme.primaryColor", "#165dff",
            "--theme.secondaryBackgroundColor", "#f5f5f5",
            "--theme.textColor", "#000000",
        ]
    if run_mode == "lite":
        cmd += [
            "--",
            "lite",
        ]
    p = subprocess.Popen(cmd)
    started_event.set()
    p.wait()

As for Streamlit's own CORS handling, an issue mentions:

Cross-origin problem, solved.
Modify startup.py as follows:
p = subprocess.Popen(["streamlit", "run", "webui.py",
"--server.enableCORS", "false",
"--server.address", host,
"--server.port", str(port)])

But "Unable to establish a WebSocket connection after deploying Streamlit in Docker on cloud hosting?" | WeChat Open Community also mentions that even with these settings it may still not work:

I tried appending one or several of --server.enableXsrfProtection=false --server.enableCORS=false --server.enableWebsocketCompression=false --browser.serverAddress=<public domain> --server.port=80 to streamlit run, and none of it helped; changing server.port of course also means changing the EXPOSE port and the pipeline's port accordingly.

Switching from https to http didn't help either.

When I tried it myself, setting only "--server.enableCORS", "false" produced this warning:

Warning: the config option 'server.enableCORS=false' is not
 compatible with 'server.enableXsrfProtection=true'.
As a result, 'server.enableCORS' is being overridden to 'true'.

More information:
In order to protect against CSRF attacks, we send a cookie with each request.
To do so, we must specify allowable origins, which places a restriction on
cross-origin resource sharing.

If cross origin resource sharing is required, please disable server.enableXsrfProtection.
     

Then, consulting Streamlit's official configuration reference (Configuration - Streamlit Docs), which records:

# Enables support for Cross-Origin Resource Sharing (CORS) protection, for
# added security.
# Due to conflicts between CORS and XSRF, if `server.enableXsrfProtection` is
# on and `server.enableCORS` is off at the same time, we will prioritize
# `server.enableXsrfProtection`.
# Default: true
enableCORS = true

# Enables support for Cross-Site Request Forgery (XSRF) protection, for added
# security.
# Due to conflicts between CORS and XSRF, if `server.enableXsrfProtection` is
# on and `server.enableCORS` is off at the same time, we will prioritize
# `server.enableXsrfProtection`.
# Default: true
enableXsrfProtection = true

So what I finally did was add the server.enableCORS and server.enableXsrfProtection options, both set to false, in Langchain-Chatchat/startup.py:

def run_webui(started_event: mp.Event = None, run_mode: str = None):
    from server.utils import set_httpx_config
    set_httpx_config()

    host = WEBUI_SERVER["host"]
    port = WEBUI_SERVER["port"]

    cmd = ["streamlit", "run", "webui.py",
            "--server.address", host,
            "--server.port", str(port),
            "--theme.base", "light",
            "--server.enableCORS", "false",
            "--server.enableXsrfProtection", "false",
            "--theme.primaryColor", "#165dff",
            "--theme.secondaryBackgroundColor", "#f5f5f5",
            "--theme.textColor", "#000000",
        ]
    if run_mode == "lite":
        cmd += [
            "--",
            "lite",
        ]
    p = subprocess.Popen(cmd)
    started_event.set()
    p.wait()
