When we use ChatGPT to get real work done, we usually need several rounds of conversation. For example, to have ChatGPT analyze, translate, or summarize an online article or document and then save the summary as a local text file, a certain amount of back-and-forth negotiation with ChatGPT is unavoidable. In fact, that negotiation can itself be automated: AutoGPT can decompose the task for us automatically. After all, whatever a program can do, a human should never have to do by hand.
The only thing we need to do is give AutoGPT a goal. AutoGPT will automatically break the goal down into small subtasks and complete them one by one, simply and efficiently.
Configuring AutoGPT
First, make sure Python 3.10.9 is installed locally.
Then pull the project with Git:
git clone https://github.com/Significant-Gravitas/Auto-GPT.git
Enter the project directory:
cd Auto-GPT
Install the dependencies:
pip3 install -r requirements.txt
After installation succeeds, copy the project's configuration file:
cp .env.template .env
Here the cp command copies the configuration template .env.template to a new configuration file named .env.
Then fill your OpenAI API key into the configuration file:
### OPENAI
# OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
# TEMPERATURE - Sets temperature in OpenAI (Default: 0)
# USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-api-key
TEMPERATURE=0
USE_AZURE=False
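For illustration, a .env file of this shape is nothing more than KEY=VALUE lines with comments. Below is a minimal sketch of how such a file could be parsed; this is not AutoGPT's actual loader (the project reads .env through standard dotenv handling), just a way to see what the format means:

```python
# Minimal sketch of parsing .env-style KEY=VALUE lines.
# Illustration only; Auto-GPT itself uses a standard dotenv loader.

def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blank lines and '#' comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

env_text = """
### OPENAI
OPENAI_API_KEY=my-openai-api-key
TEMPERATURE=0
USE_AZURE=False
"""
print(parse_env(env_text))
```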
Besides OpenAI's official API key, AutoGPT also supports Microsoft Azure's API.
To use the Azure API, set USE_AZURE to True in the configuration, then copy the azure.yaml.template template to a new azure.yaml configuration file.
Then fill your Azure OpenAI credentials into azure.yaml.
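For reference, the azure.yaml.template of that era looked roughly like the following; field names may differ between Auto-GPT versions, and every value below is a placeholder you would replace with your own Azure resource details:

```yaml
# Sketch of an azure.yaml configuration; all values are placeholders.
azure_api_type: azure
azure_api_base: https://your-resource-name.openai.azure.com
azure_api_version: 2023-03-15-preview
azure_model_map:
  fast_llm_model_deployment_id: your-gpt35-deployment-id
  smart_llm_model_deployment_id: your-gpt4-deployment-id
  embedding_model_deployment_id: your-embedding-deployment-id
```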
Since getting OpenAI access through Azure involves a rather involved application process, we will stick with OpenAI's official API here.
Of course, if you would rather not install all those dependencies locally, you can instead build Auto-GPT as a Docker container:
docker build -t autogpt .
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt
Docker automatically reads the project's Dockerfile to build the image, which is quite convenient.
With that, Auto-GPT is configured.
Running Auto-GPT
Run this command in the project root:
python3 -m autogpt --debug
to start AutoGPT:
➜ Auto-GPT git:(master) python -m autogpt --debug
Warning: The file 'AutoGpt.json' does not exist. Local memory would not be saved to a file.
Debug Mode: ENABLED
Welcome to Auto-GPT! Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
AI Name:
First, give the AutoGPT bot a name:
AI Name: v3u.cn
v3u.cn here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
v3u.cn is:
Once the name is set, Auto-GPT is at your service.
First, set a goal for AutoGPT:
v3u.cn is: Analyze the contents of this article,the url is https://v3u.cn/a_id_303,and write the result to goal.txt
Here we ask AutoGPT to analyze and summarize the article at v3u.cn/a_id_303 and write the result to a local file named goal.txt.
The program responds:
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1:
Using memory of type: LocalCache
AutoGPT tells us the goal can be split into at most five tasks. We can enter the subtasks ourselves, or let the bot do the splitting: just press Enter and AutoGPT will decompose the task automatically.
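Conceptually, the name, role, and goals you enter are simply folded into the system prompt that gets sent to the model on every cycle, with the goals as a numbered list. A minimal sketch of that formatting, assuming a simplified layout rather than Auto-GPT's exact prompt template:

```python
# Sketch: fold a name, role, and goal list into a system prompt.
# Auto-GPT's real template is far more elaborate; this only shows the idea.

def build_prompt(name: str, role: str, goals: list[str]) -> str:
    lines = [f"You are {name}, {role}", "GOALS:"]
    for i, goal in enumerate(goals, start=1):
        lines.append(f"{i}. {goal}")
    return "\n".join(lines)

prompt = build_prompt(
    "v3u.cn",
    "an AI that analyzes articles and saves summaries.",
    ["Analyze https://v3u.cn/a_id_303", "Write the result to goal.txt"],
)
print(prompt)
```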
The program then crawls the article's content and analyzes it with the gpt-3.5-turbo model:
Goal 1:
Using memory of type: LocalCache
Using Browser: chrome
Token limit: 4000
Memory Stats: (0, (0, 1536))
Token limit: 4000
Send Token Count: 936
Tokens remaining for response: 3064
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Mon Apr 17 20:29:37 2023
System: This reminds you of these events from your past:
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 3064
The JSON object is valid.
THOUGHTS: Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.
REASONING: Browsing the article will allow us to analyze its contents and determine the appropriate next steps.
PLAN:
- Browse the article
- Analyze its contents
- Determine the appropriate next steps
CRITICISM: None
NEXT ACTION: COMMAND = browse_website ARGUMENTS = {'url': 'https://v3u.cn/a_id_303', 'question': 'analyze the contents of the article'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
[WDM] - Downloading: 100%|████████████████████████████████████████████| 8.04M/8.04M [00:03<00:00, 2.42MB/s]
Text length: 6977 characters
Adding chunk 1 / 1 to memory
Summarizing chunk 1 / 1
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 300
SYSTEM: Command browse_website returned: Error: This model's maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion.
Token limit: 4000
Memory Stats: (2, (2, 1536))
Token limit: 4000
Send Token Count: 1472
Tokens remaining for response: 2528
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Mon Apr 17 20:30:19 2023
System: This reminds you of these events from your past:
['Assistant Reply: {\n "thoughts": {\n "text": "Let\'s start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",\n "reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",\n "plan": "- Browse the article\\n- Analyze its contents\\n- Determine the appropriate next steps",\n "criticism": "None",\n "speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."\n },\n "command": {\n "name": "browse_website",\n "args": {\n "url": "https://v3u.cn/a_id_303",\n "question": "analyze the contents of the article"\n }\n }\n} \nResult: Command browse_website returned: Error: This model\'s maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion. \nHuman Feedback: GENERATE NEXT COMMAND JSON ']
User: Determine which next command to use, and respond using the format specified above:
Assistant: {
"thoughts": {
"text": "Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",
"reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",
"plan": "- Browse the article\n- Analyze its contents\n- Determine the appropriate next steps",
"criticism": "None",
"speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."
},
"command": {
"name": "browse_website",
"args": {
"url": "https://v3u.cn/a_id_303",
"question": "analyze the contents of the article"
}
}
}
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 2528
Finally, it writes the analysis to the goal.txt file:
This article mainly explains that an Apple Mac can handle machine learning and deep learning workloads, demonstrating this by installing and running the TensorFlow deep learning framework, and backing it up with an in-depth comparison and benchmark of TensorFlow's CPU and GPU training modes.
All in one smooth, uninterrupted pass.
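The "Adding chunk 1 / 1" and "Summarizing chunk 1 / 1" lines in the log reflect a common pattern: when a page is longer than the model's context window, the text is split into chunks, each chunk is summarized separately, and the partial summaries are then combined. A rough sketch of that pattern, where the chunk size is arbitrary and `summarize` is a stand-in for a real gpt-3.5-turbo call:

```python
# Sketch of chunked summarization: split long text, summarize each piece,
# then summarize the concatenation of the partial summaries.

def split_text(text: str, max_len: int = 3000) -> list[str]:
    """Split text into pieces of at most max_len characters."""
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

def summarize(text: str) -> str:
    # Placeholder: a real implementation would call the chat completion API.
    return text[:100]

def summarize_long_text(text: str) -> str:
    chunks = split_text(text)
    partials = [summarize(c) for c in chunks]  # "Summarizing chunk i / n"
    if len(partials) == 1:
        return partials[0]
    return summarize("\n".join(partials))
```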
Conclusion
What sets AutoGPT apart from other AI programs is its focus on generating prompts and executing multi-step tasks on its own, without human intervention. It can also scan the internet or run commands on the user's computer to gather information, which distinguishes it from AI programs that rely only on pre-existing datasets.
AutoGPT's underlying logic is not complicated: it first searches for information about the task, then hands the results and the goal to GPT and asks it for a serialized plan in JSON; the plan is fed back to GPT piece by piece, and finally a shell is used to create Python files, json.load the plan, and execute it. The whole thing is a repeated, recursive process.
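The loop described above can be sketched in a few lines. This is a conceptual reconstruction, not Auto-GPT's actual source: `ask_gpt` stands in for a chat-completion call, `execute_command` stands in for the command dispatcher, and the command set is reduced to what appears in this run:

```python
import json

# Conceptual sketch of the Auto-GPT loop: ask the model for a JSON plan,
# execute the chosen command, feed the result back into history, repeat.

def run_agent(goal: str, ask_gpt, execute_command, max_steps: int = 10):
    history = []
    for _ in range(max_steps):
        reply = ask_gpt(goal, history)        # model returns a JSON plan
        plan = json.loads(reply)
        name = plan["command"]["name"]
        args = plan["command"]["args"]
        if name == "task_complete":           # model decides it is done
            return history
        result = execute_command(name, args)  # e.g. browse_website, write_to_file
        history.append((name, args, result))
    return history
```

With stub functions in place of the real API, the loop runs browse_website and write_to_file, then stops when the model replies task_complete.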
Admittedly, even though the implementation logic is simple, this is undeniably a form of "self-evolution", and as time goes on AutoGPT should be able to handle increasingly complex tasks.