Deploying Mistral 7B with Function Calling and JSON Mode Using llama.cpp in Docker
Notes:
- First published: 2024-08-27
- References:
- https://www.markhneedham.com/blog/2024/06/23/mistral-7b-function-calling-llama-cpp/
- https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#function-calling
- https://github.com/abetlen/llama-cpp-python/tree/main/docker#cuda_simple
- https://docs.mistral.ai/capabilities/json_mode/
- https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF
- https://stackoverflow.com/questions/30905674/newer-versions-of-docker-have-cap-add-what-caps-can-be-added
- https://man7.org/linux/man-pages/man7/capabilities.7.html
- https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities
- https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
- https://www.cnblogs.com/davis12/p/14453690.html
Downloading the GGUF model
Use the HuggingFace mirror https://hf-mirror.com/
Option 1:
pip install -U huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF --include "*Q4_K_M.gguf"
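Option 1 can also be scripted with huggingface_hub directly. A minimal sketch, assuming the mirror endpoint is set before the library is imported (local_dir is an arbitrary target folder):
import os

# The mirror endpoint must be set before huggingface_hub is imported
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

# Download only the Q4_K_M quantization from the repo
snapshot_download(
    repo_id="MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF",
    allow_patterns=["*Q4_K_M.gguf"],
    local_dir="MaziyarPanahi--Mistral-7B-Instruct-v0.3-GGUF",
)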
Option 2 (recommended):
sudo apt update
sudo apt install aria2 git-lfs
wget https://hf-mirror.com/hfd/hfd.sh
chmod a+x hfd.sh
./hfd.sh MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF --include "*Q4_K_M.gguf" --tool aria2c -x 16 --local-dir MaziyarPanahi--Mistral-7B-Instruct-v0.3-GGUF
Deploying the service with Docker
The NVIDIA Container Toolkit must be installed before building the image.
Installing the NVIDIA Container Toolkit
Set up the package repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
Install:
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
Configure Docker to use the NVIDIA runtime, then restart the Docker daemon so the change takes effect (per the official install guide):
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
For more information on installing the NVIDIA Container Toolkit, see the official documentation: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
Building the image
Use the official Dockerfile: https://github.com/abetlen/llama-cpp-python/blob/main/docker/cuda_simple/Dockerfile
ARG CUDA_IMAGE="12.2.0-devel-ubuntu22.04"
FROM nvidia/cuda:${CUDA_IMAGE}
# We need to set the host to 0.0.0.0 to allow outside access
ENV HOST 0.0.0.0
RUN apt-get update && apt-get upgrade -y \
&& apt-get install -y git build-essential \
python3 python3-pip gcc wget \
ocl-icd-opencl-dev opencl-headers clinfo \
libclblast-dev libopenblas-dev \
&& mkdir -p /etc/OpenCL/vendors && echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
COPY . .
# setting build related env vars
ENV CUDA_DOCKER_ARCH=all
ENV GGML_CUDA=1
# Install dependencies
RUN python3 -m pip install --upgrade pip pytest cmake scikit-build setuptools fastapi uvicorn sse-starlette pydantic-settings starlette-context
# Install llama-cpp-python (build with cuda)
RUN CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
# Run the server
CMD python3 -m llama_cpp.server
Since my locally installed CUDA version is 12.2, I changed the base image to nvidia/cuda:12.2.0-devel-ubuntu22.04, then built:
docker build -t llama_cpp_cuda_simple .
Starting the service
docker run --gpus=all --cap-add SYS_RESOURCE -e USE_MLOCK=0 -e model=/models/downloaded/MaziyarPanahi--Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3.Q4_K_M.gguf -e n_gpu_layers=-1 -e chat_format=chatml-function-calling -v /mnt/d/16-LLM-Cache/llama_cpp_gnuf:/models -p 8000:8000 -t llama_cpp_cuda_simple
Where:
- -v maps a local folder to the /models folder inside the container
- --gpus=all makes all GPUs available to the container
- --cap-add SYS_RESOURCE grants the container the SYS_RESOURCE capability
- The flags starting with -e set environment variables, which in turn set the llama_cpp.server parameters; see the code at https://github.com/abetlen/llama-cpp-python/blob/259ee151da9a569f58f6d4979e97cfd5d5bc3ecd/llama_cpp/server/main.py#L79 and https://github.com/abetlen/llama-cpp-python/blob/259ee151da9a569f58f6d4979e97cfd5d5bc3ecd/llama_cpp/server/settings.py#L17. These environment variables are case-insensitive, see https://docs.pydantic.dev/latest/concepts/pydantic_settings/#case-sensitivity
- -e model points to the model file
- -e n_gpu_layers=-1 offloads all layers to the GPU. In general, if the model has N layers and n_gpu_layers of them are placed on the GPU, the remaining N - n_gpu_layers layers run on the CPU.
- -e chat_format=chatml-function-calling enables the Function Calling feature
Once the server has started, open http://localhost:8000/docs in a browser to view the API documentation.
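As a quick sanity check, the server can also be queried from Python. A minimal sketch using the openai client (v1.x); the API key is an arbitrary placeholder, since the server does not verify it unless one is configured:
from openai import OpenAI

# Point the client at the local llama.cpp server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-xxxxxxxxxxxxxxxxxxxxxx")

# List the models the server exposes
print(client.models.list())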
Testing the API
Function Calling
curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxx' \
--data '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant.\nYou can call functions with appropriate input when necessary"
    },
    {
      "role": "user",
      "content": "What'\''s the weather like in Mauritius?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given latitude and longitude",
        "parameters": {
          "type": "object",
          "properties": {
            "latitude": {
              "type": "number",
              "description": "The latitude of a place"
            },
            "longitude": {
              "type": "number",
              "description": "The longitude of a place"
            }
          },
          "required": ["latitude", "longitude"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}'
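The same request via the openai Python client, as a sketch (the tool schema mirrors the curl payload above, and the output below applies to either form):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-xxxxxxxxxxxxxxxxxxxxxx")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given latitude and longitude",
            "parameters": {
                "type": "object",
                "properties": {
                    "latitude": {"type": "number", "description": "The latitude of a place"},
                    "longitude": {"type": "number", "description": "The longitude of a place"},
                },
                "required": ["latitude", "longitude"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant.\nYou can call functions with appropriate input when necessary"},
        {"role": "user", "content": "What's the weather like in Mauritius?"},
    ],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)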
Output:
{
  "id": "chatcmpl-50c8e261-2b1a-4285-a6ee-e18a07ce92d9",
  "object": "chat.completion",
  "created": 1724757544,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "content": null,
        "tool_calls": [
          {
            "id": "call__0_get_current_weather_cmpl-97515c72-d214-4ed9-b183-7736199e5be1",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\"latitude\": -20.375, \"longitude\": 57.568} "
            }
          }
        ],
        "role": "assistant",
        "function_call": {
          "name": "",
          "arguments": "{\"latitude\": -20.375, \"longitude\": 57.568} "
        }
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 299,
    "completion_tokens": 25,
    "total_tokens": 324
  }
}
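Note that the model only proposes a call; your code has to parse tool_calls and execute the matching function. A minimal sketch, where get_current_weather is a hypothetical stub and response is the object returned by the Python client call above:
import json

def get_current_weather(latitude: float, longitude: float) -> dict:
    # Hypothetical stub; a real implementation would query a weather API
    return {"latitude": latitude, "longitude": longitude, "temperature_c": 24}

tool_call = response.choices[0].message.tool_calls[0]
if tool_call.function.name == "get_current_weather":
    # The arguments arrive as a JSON string; trailing whitespace is tolerated
    args = json.loads(tool_call.function.arguments)
    print(get_current_weather(**args))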
JSON Mode
curl --location "http://localhost:8000/v1/chat/completions" \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header "Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxx" \
--data '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "What is the best French cheese? Return the product and produce location in JSON format"
}
],
"response_format": {"type": "json_object"}
}'
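The equivalent request via the openai Python client, as a sketch (the output shown below applies to either form):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-xxxxxxxxxxxxxxxxxxxxxx")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "What is the best French cheese? Return the product and produce location in JSON format",
        }
    ],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)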
Output:
{
  "id": "chatcmpl-bbfecfc5-2ea9-4052-93b2-08f1733e8219",
  "object": "chat.completion",
  "created": 1724757752,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "content": "{\n \"product\": \"Roquefort\",\n \"produce_location\": \"France, South of France\"\n}\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t",
        "role": "assistant"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 44,
    "completion_tokens": 50,
    "total_tokens": 94
  }
}
Use the following code to write the content part to a text file:
# The content string returned in the JSON-mode response above
text = "{\n \"product\": \"Roquefort\",\n \"produce_location\": \"France, South of France\"\n}\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t"
with open('resp.txt', 'w') as f:
    f.write(text)
The file then contains:
{
 "product": "Roquefort",
 "produce_location": "France, South of France"
}
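Since json.loads ignores surrounding whitespace, the same string can also be parsed straight into a Python dict instead of being written to disk (text is the variable defined above):
import json

data = json.loads(text)  # the trailing tabs are ignored by the parser
print(data["product"])   # Roquefort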