
Source Code Deployment

1. Clone the project

git clone https://github.com/zhayujie/chatgpt-on-wechat
cd chatgpt-on-wechat/
If you run into network issues, clone from the Gitee mirror instead: https://gitee.com/zhayujie/chatgpt-on-wechat

2. Install dependencies

Core dependencies (required):
pip3 install -r requirements.txt
Optional dependencies (recommended):
pip3 install -r requirements-optional.txt
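Before installing, it can help to confirm the interpreter is new enough. The 3.7 floor used below is an assumption for illustration; check the repository README for the project's actual requirement.

```shell
# Verify python3 meets the assumed minimum version (3.7 here; confirm the
# real floor in the project README) before installing dependencies.
python3 -c 'import sys; assert sys.version_info >= (3, 7), sys.version'
echo "Python version OK"
```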

3. Configure

Copy the config template and edit:
cp config-template.json config.json
Fill in model API keys, channel type, and other settings in config.json. See the model docs for details.
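After editing, it is worth confirming the file still parses as JSON — a stray trailing comma is the most common editing mistake. The standard-library json.tool module catches this; the sample below is inline so the command is self-contained, but in practice you would point it at your config.json.

```shell
# Validate JSON syntax with Python's stdlib json.tool. In practice:
#   python3 -m json.tool config.json
python3 -m json.tool <<'EOF' > /dev/null && echo "valid JSON"
{"channel_type": "web", "model": "MiniMax-M2.5"}
EOF
```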

4. Run

Local run:
python3 app.py
By default, the web service starts; open http://localhost:9899/chat to chat.
To run it in the background on a server:
nohup python3 app.py & tail -f nohup.out
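The nohup pattern above detaches the process from your terminal so it survives logout, appending output to nohup.out. A minimal, self-contained illustration of the mechanics, using a stand-in command rather than app.py:

```shell
# nohup ignores the hangup signal, so the process keeps running after you
# log out; stdout/stderr are collected in nohup.out. The python3 -c command
# here is a stand-in for app.py, purely for illustration.
nohup python3 -c 'print("service started")' < /dev/null > nohup.out 2>&1 &
wait $!              # in a real deployment you would not wait
cat nohup.out        # equivalent of tail -f nohup.out for a finished process
```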

Docker Deployment

Docker deployment requires neither cloning the source code nor installing dependencies. For Agent mode, however, source deployment is recommended, since it gives the agent broader access to the host system.
Requires Docker and docker-compose.
1. Download config
wget https://cdn.link-ai.tech/code/cow/docker-compose.yml
Edit docker-compose.yml with your configuration.
2. Start the container
sudo docker compose up -d
3. View logs
sudo docker logs -f chatgpt-on-wechat

Core Configuration

{
  "channel_type": "web",
  "model": "MiniMax-M2.5",
  "agent": true,
  "agent_workspace": "~/cow",
  "agent_max_context_tokens": 40000,
  "agent_max_context_turns": 30,
  "agent_max_steps": 15
}
Parameter                  Description                    Default
channel_type               Channel type                   web
model                      Model name                     MiniMax-M2.5
agent                      Enable Agent mode              true
agent_workspace            Agent workspace path           ~/cow
agent_max_context_tokens   Max context tokens             40000
agent_max_context_turns    Max context turns              30
agent_max_steps            Max decision steps per task    15
Full configuration options are in the project config.py.
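As a quick sanity check, the core configuration above can be loaded and inspected before starting the bot. The required-key set below is an illustrative assumption, not the project's official validation logic:

```shell
# Illustrative check that the core keys are present in the sample config.
# The {"channel_type", "model"} required set is an assumption for this
# sketch; consult config.py for the authoritative option list.
python3 - <<'EOF'
import json

cfg = json.loads("""{
  "channel_type": "web",
  "model": "MiniMax-M2.5",
  "agent": true,
  "agent_workspace": "~/cow",
  "agent_max_context_tokens": 40000,
  "agent_max_context_turns": 30,
  "agent_max_steps": 15
}""")

missing = {"channel_type", "model"} - cfg.keys()
assert not missing, f"missing keys: {missing}"
print("core keys present:", cfg["channel_type"], cfg["model"])
EOF
```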