AgentVerse committed on
Commit 8d1bc11
1 Parent(s): 9143be1

Update README.md

Files changed (2)
  1. README.md +423 -7
  2. README_zh.md +373 -0
README.md CHANGED
@@ -1,13 +1,429 @@
1
  ---
2
  title: AgentVerse
3
- emoji: 🔥
4
- colorFrom: indigo
5
- colorTo: indigo
6
  sdk: gradio
7
- sdk_version: 3.47.1
8
- app_file: app.py
9
- pinned: false
10
  license: apache-2.0
11
  ---
12
 
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
  ---
2
  title: AgentVerse
3
  sdk: gradio
4
  license: apache-2.0
5
+ emoji: 🤖
6
+ colorFrom: indigo
7
+ colorTo: indigo
8
  ---
9
 
10
+ <h1 align="center"> 🤖 AgentVerse 🪐 </h1>
11
+
12
+ <h3 align="center">
13
+ <p>A Framework for Multi-LLM Environment Simulation</p>
14
+ </h3>
15
+
16
+ <p align="center">
17
+ <a href="https://github.com/OpenBMB/AgentVerse/blob/main/LICENSE">
18
+ <img alt="License: Apache2" src="https://img.shields.io/badge/License-Apache_2.0-green.svg">
19
+ </a>
20
+ <a href="https://www.python.org/downloads/release/python-3916/">
21
+ <img alt="Python Version" src="https://img.shields.io/badge/python-3.9+-blue.svg">
22
+ </a>
23
+ <a href="https://github.com/OpenBMB/AgentVerse/actions/">
24
+ <img alt="Build" src="https://img.shields.io/github/actions/workflow/status/OpenBMB/AgentVerse/test.yml">
25
+ </a>
26
+ <a href="https://github.com/psf/black">
27
+ <img alt="Code Style: Black" src="https://img.shields.io/badge/code%20style-black-black">
28
+ </a>
29
+ <a href="https://github.com/OpenBMB/AgentVerse/issues">
30
+ <img alt="Contributions: Welcome" src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat">
31
+ </a>
32
+
33
+ </p>
34
+
35
+ <p align="center">
36
+ <img src="./imgs/title.png" width="512">
37
+ </p>
38
+
39
+ <p align="center">
40
+ 【English | <a href="README_zh.md">Chinese</a>】
41
+ </p>
42
+
43
+ **AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research, rather than being bogged down by implementation details.
44
+
45
+ ⚠️⚠️⚠️ We're refactoring the code, with the goal of providing the flexibility to construct both simulation (without a predefined goal) and task-solving (with a specific goal) environments. Please note that this README is slightly outdated; we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use the [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
46
+
47
+ ---
48
+
49
+ ## ✨ Features
50
+
51
+ - 🥳 **Efficient Environment Building:** Our framework provides a collection of essential building blocks for effortlessly creating a multi-agent environment. With only a few lines in a configuration file, you can easily construct basic environments such as a chat room for LLMs. This process entails defining the environment's settings and prompts for LLMs, enabling researchers like you to concentrate on experimentation and analysis.
52
+
53
+ - ⚙️ **Customizable Components**: AgentVerse simplifies the multi-agent environment by dividing it into five functional modules and defining their respective interfaces. For complex environments that cannot be constructed directly using the basic modules offered in AgentVerse, you can customize one or more of the interfaces within these five functional modules to efficiently create your own multi-agent environment according to your requirements.
54
+
55
+ - 🛠 **Tools (Plugins) Utilization**: AgentVerse supports the multi-agent environments with tools. Currently, AgentVerse supports tools provided in [BMTools](https://github.com/OpenBMB/BMTools).
56
+
57
+ ## 📰 What's New
58
+ - [2023/10/5] 💡 We release the code of our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), and refactor our codebase to enable the creation of both simulation and task-solving environments! We have placed the code for the Minecraft example in the paper at the [`minecraft`](https://github.com/OpenBMB/AgentVerse/tree/minecraft) branch. Our tool-using example will soon be merged into the `main` branch. Stay tuned!
59
+
60
+ - [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.
61
+ <p align="center">
62
+ <img width="616" alt="Screen Shot 2023-09-01 at 12 08 57 PM" src="https://github.com/OpenBMB/AgentVerse/assets/11704492/6db1c907-b7fc-42f9-946c-89853a28f386">
63
+ </p>
64
+
65
+ - [2023/6/5] 🎉 We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that enables interaction with the characters in Pokemon! Try out these demos and have fun!
66
+ - [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
67
+
68
+ ## 🌟 Join Us!
69
+ AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
70
+ ### How Can You Contribute?
71
+ - **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
72
+
73
+ - **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
74
+
75
+ - **Application Exploration**: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!
76
+
77
+ - **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
78
+
79
+ Also, if you're passionate about advancing the frontiers of multi-agent environments and are eager to dive deeper into research, we invite you to join our team at THUNLP. To explore this exciting opportunity and embark on a collaborative journey with us, please reach out to [[email protected]]([email protected]) and [[email protected]]([email protected]) and express your interest. We're keen to welcome motivated individuals like you to our lab!
80
+
81
+ 👉 Also, check out our Discord: https://discord.gg/cnutfCtC.
82
+
83
+ ## 🗓 Coming Soon
84
+ - [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
85
+ - [ ] Add documentation
86
+ - [ ] Support more sophisticated memory for conversation history
87
+ - [ ] Add support for local LLM
88
+
89
+
90
+ ## 👾 Simple Demo Video
91
+
92
+ We demonstrate the following cases that are expertly crafted with AgentVerse.
93
+ <!--
94
+ ### [![Demo video](https://i.imgur.com/vKb2F1B.png)](https://youtu.be/9JCVfzMFhaM)
95
+ -->
96
+ <!--![image](imgs/multiagent-min.gif)-->
97
+
98
+ <!-- - **NLP Classroom**: -->
99
+
100
+ #### NLP Classroom
101
+ In the NLP class, the professor and students engage in interactive communication. When students have a question, they raise their hands and patiently wait for the professor to call on them. Only after being called on by the professor can students speak and ask their questions.
102
+
103
+ Use the following command to launch the NLP Classroom example:
104
+ ```bash
105
+ python agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players
106
+ ```
107
+
108
+ [Watch the NLP Classroom Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/6ea07850-595e-4a28-a82e-f863011353c2)
109
+
110
+
111
+ #### Prisoner Dilemma
112
+ The Prisoner's Dilemma is a thought experiment that challenges two completely rational agents with a dilemma: they can cooperate with their partner for mutual benefit, or betray their partner ("defect") for individual reward.
113
+
114
+ Use the following command to launch the Prisoner Dilemma example:
115
+ ```bash
116
+ python agentverse_command/main_simulation_gui.py --task simulation/prisoner_dilemma
117
+ ```
118
+
119
+ [Watch the Prisoner's Dilemma Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/017c46e5-c738-4fca-9352-b008e2d518bd)
120
+
121
+
122
+ #### Software Design
123
+ In the Software Design example, a code writer, a code tester, and a code reviewer collaborate on a code generation problem. Given a problem, the code writer first composes the code implementation. The code tester runs the unit tests and provides feedback. The code reviewer then generates a review. After collecting the test feedback and the review, the code writer iteratively refines the code.
124
+
125
+ Use the following command to launch the Software Design example:
126
+ ```bash
127
+ python agentverse_command/main_simulation_gui.py --task simulation/sde_team/sde_team_2players
128
+ ```
129
+
130
+ [Watch the Software Design Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4e54661626a)
131
+
132
+
133
+ #### [Database Administrator (DBA)](https://github.com/TsinghuaDatabaseGroup/DB-GPT)
134
+
135
+ In the database diagnosis scenario, the Chief DBA monitors the database system for anomalies (e.g., slow queries, locks, crashes). If any are detected, the domain experts are alerted to analyze root causes, share insights, and suggest optimization solutions together. The Chief DBA then provides a summarized report to the user.
136
+
137
+ ```bash
138
+ python agentverse_command/main_simulation_gui.py --task simulation/db_diag
139
+ ```
140
+
141
+ [Watch the DBA Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/c633419d-afbb-47d4-bb12-6bb512e7af3a)
142
+
143
+ #### [Text Evaluation (ChatEval)](https://github.com/chanchimin/ChatEval)
144
+ In the context of the text evaluation scenario, we recommend users explore the [ChatEval](https://github.com/chanchimin/ChatEval) repo. They've implemented a multi-agent referee team on AgentVerse to assess the quality of text generated by different models. When given two distinct pieces of text, roles within ChatEval can autonomously debate the nuances and disparities, drawing upon their assigned personas, and subsequently provide their judgments. Experiments indicate that their referee team, enriched with diverse roles specified in [config.yaml](#2-configuring-the-agents), aligns more closely with human evaluations. This demo is built upon the [Fastchat](https://github.com/lm-sys/FastChat) repo, and we'd like to express our appreciation for their foundational work.
145
+
146
+
147
+ [Watch the ChatEval Video](https://github.com/OpenBMB/AgentVerse/assets/75533759/58f33468-f15b-4bac-ae01-8d0780019f85)
148
+
149
+ #### Pokemon
150
+ **Currently available only in [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1)**. In the game, agents can walk around the game world and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are six characters in the Pokémon environment who appear in Pokémon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
151
+
152
+ To launch the Pokemon game, first launch a local server with the following command:
153
+ ```bash
154
+ uvicorn pokemon_server:app --reload --port 10002
155
+ ```
156
+ Then open another terminal in the project's root path and run the following command:
157
+ ```bash
158
+ cd ui
159
+ # If you do not have npm installed, you need to install it before running the following commands
160
+ # https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
161
+ # We have tested on [email protected], [email protected]
162
+ npm install
163
+ npm run watch
164
+ ```
165
+ Wait for the compilation to complete, and have fun! (WASD for moving around, and SPACE for launching a conversation.)
166
+
167
+ [Watch the Pokemon Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7)
168
+
169
+
170
+
171
+ ## Contents
172
+
173
+ - [✨ Features](#-features)
174
+ - [📰 What's New](#-whats-new)
175
+ - [🌟 Join Us!](#-join-us)
176
+ - [How Can You Contribute?](#how-can-you-contribute)
177
+ - [🗓 Coming Soon](#-coming-soon)
178
+ - [👾 Simple Demo Video](#-simple-demo-video)
179
+ - [NLP Classroom](#nlp-classroom)
180
+ - [Prisoner Dilemma](#prisoner-dilemma)
181
+ - [Software Design](#software-design)
182
+ - [Database Administrator (DBA)](#database-administrator-dba)
183
+ - [Text Evaluation (ChatEval)](#text-evaluation-chateval)
184
+ - [Pokemon](#pokemon)
185
+ - [Contents](#contents)
186
+ - [🚀 Getting Started](#-getting-started)
187
+ - [Installation](#installation)
188
+ - [Simulation CLI Example](#simulation-cli-example)
189
+ - [Simulation Local Website Demo](#simulation-local-website-demo)
190
+ - [Task-Solving CLI Example](#task-solving-cli-example)
191
+ - [💡 Philosophy](#-philosophy)
192
+ - [Environment](#environment)
193
+ - [Agent](#agent)
194
+ - [✍️ Customize Your Own Environment](#️-customize-your-own-environment)
195
+ - [A Simple Example: Building a Classroom Environment](#a-simple-example-building-a-classroom-environment)
196
+ - [1. Creating a Task Directory and Configuring the Environment](#1-creating-a-task-directory-and-configuring-the-environment)
197
+ - [2. Configuring the Agents](#2-configuring-the-agents)
198
+ - [3. Writing an Output Parser](#3-writing-an-output-parser)
199
+ - [Customization Guide for More Complex Environments](#customization-guide-for-more-complex-environments)
200
+ - [🔎 Examples](#-examples)
201
+ - [Star History](#star-history)
202
+ - [Citation](#citation)
203
+ - [Contact](#contact)
204
+
205
+
206
+
207
+ ## 🚀 Getting Started
208
+
209
+ ### Installation
210
+
211
+ ```bash
212
+ pip install -U agentverse
213
+ ```
214
+ Or you can install the package by manually cloning the latest repository:
215
+ ```bash
216
+ git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
217
+ cd AgentVerse
218
+ pip install -r requirements.txt
219
+ ```
220
+ Some users have reported problems installing the `orjson` package required by `gradio`. One simple workaround is to install it with Anaconda: `conda install -c conda-forge orjson`.
221
+
222
+ You also need to export your OpenAI API key as follows:
223
+ ```bash
224
+ # Export your OpenAI API key
225
+ export OPENAI_API_KEY="your_api_key_here"
226
+ # Or if you are using Azure
227
+ export AZURE_OPENAI_API_KEY="your_api_key_here"
228
+ export AZURE_OPENAI_API_BASE="your_api_base_here"
229
+ ```
230
+
231
+ If you want to use Azure OpenAI services, please export your Azure OpenAI key and OpenAI API base as follows:
232
+ ```bash
233
+ export AZURE_OPENAI_API_KEY="your_api_key_here"
234
+ export AZURE_OPENAI_API_BASE="your_api_base_here"
235
+ ```
236
+
237
+ If you want to use the tools provided by BMTools, you need to install BMTools as follows:
238
+ ```bash
239
+ git clone https://github.com/OpenBMB/BMTools.git
240
+ cd BMTools
241
+ pip install -r requirements.txt
242
+ python setup.py develop
243
+ ```
244
+
245
+
246
+ <!--
247
+ # Install BMTools
248
+ cd ../
249
+ git clone [email protected]:OpenBMB/BMTools.git
250
+ cd BMTools
251
+ python setup.py develop
252
+ -->
253
+
254
+ ### Simulation CLI Example
255
+
256
+ You can run one of the multi-agent environments provided by us, using the classroom scenario as an example. In this scenario, there are nine agents: one plays the role of a professor and the other eight play students.
257
+
258
+ ```shell
259
+ python3 agentverse_command/main_simulation_cli.py --task simulation/nlp_classroom_9players
260
+ # or if you have installed AgentVerse via pip
261
+ agentverse-simulation --task simulation/nlp_classroom_9players
262
+ ```
263
+
264
+ ### Simulation Local Website Demo
265
+
266
+ We also provide a local website demo for this environment. You can launch it with
267
+
268
+ ```shell
269
+ python3 agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players
270
+ # or if you have installed AgentVerse via pip
271
+ agentverse-simulation-gui --task simulation/nlp_classroom_9players
272
+ ```
273
+ After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment.
274
+
275
+ ### Task-Solving CLI Example
276
+
277
+ To run the experiments with the task-solving environment proposed in our [paper](https://arxiv.org/abs/2308.10848), you can use the following command:
278
+
279
+ ```shell
280
+ # Run the HumanEval benchmark using gpt-3.5-turbo
281
+ python3 agentverse_command/main_tasksolving_cli.py --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
282
+ # or if you have installed AgentVerse via pip
283
+ agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
284
+ ```
285
+
286
+ You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
287
+
288
+
289
+ ## 💡 Philosophy
290
+
291
+ ### Environment
292
+
293
+ At the core of our framework is the environment, which plays a crucial role in enabling researchers to study the behavior of agents under different conditions. We believe that the environment should be flexible and extensible, allowing researchers to easily customize it to fit their needs. To achieve this, we have abstracted the environment into five rule components, so implementing a different environment amounts to implementing different rules:
294
+
295
+ - **Describer**: This component provides a description of the environment at each turn for each agent. You can customize the describer to define the specific requirements of your environment, such as the agents with whom an agent can interact.
296
+ - **Order**: This component defines the order in which agents take actions within the environment. You can customize the order to reflect the desired interaction between agents. We provide several basic order options, including `random`, `sequential`, and `concurrent` (in which all agents take an action in each turn).
297
+ - **Selector**: This component selects the valid messages generated by agents. Sometimes agents may generate invalid responses, and the selector is used to filter out unexpected results.
298
+ - **Updater**: This component updates the memory of each agent. In certain cases, the response generated by one agent should not be seen by all agents (e.g., if agents are in different rooms). For each response, the updater updates only the agents who can see it.
299
+ - **Visibility**: This component maintains the list of agents that each agent can see throughout the environment's changes. For example, when an agent moves from one room to another, the list of visible agents of each agent should be updated by `visibility`.
300
+
301
+ By abstracting the environment into these five components, we have created a highly flexible and extensible framework that enables researchers to easily build and customize their own multi-agent environments.
302
+
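As an illustration of how small these rule components can be, here is a rough sketch of a random `Order` in plain Python. The class and method names below are assumptions for illustration only; the real interfaces live in AgentVerse's rule modules.

```python
import random

# Hypothetical sketch of a custom Order rule component. The actual
# AgentVerse interface may differ; only the idea is shown here.
class RandomOrder:
    """Yields the agents' indices in a freshly shuffled order each turn."""

    def get_next_agent_idx(self, n_agents: int) -> list[int]:
        indices = list(range(n_agents))
        random.shuffle(indices)  # each turn, agents act in a random order
        return indices
```

A `sequential` order would instead return `list(range(n_agents))` unchanged, and a `concurrent` order would let all agents act within the same turn.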
303
+ ### Agent
304
+
305
+ Another fundamental component is the agent. Currently we provide two types of agents: **ConversationAgent** and **ToolAgent**. You can also customize your own agent by inheriting the `BaseAgent` class (tutorial coming soon).
306
+
307
+ ## ✍️ Customize Your Own Environment
308
+
309
+ We have provided several examples in the `agentverse/tasks` directory. To customize your environment, you should
310
+
311
+ 1. Create a task directory in `agentverse/tasks`
312
+ 2. Write the configuration file
313
+ 3. Write the output parser that parses the response of your agents.
314
+ 4. Add your parser in `agentverse/tasks/__init__.py`
315
+
316
+ We will use a simple example in `agentverse/tasks/nlp_classroom_3players` to illustrate the procedure.
317
+
318
+ ### A Simple Example: Building a Classroom Environment
319
+
320
+ To illustrate how to customize your environment, we'll use a simple example of building a classroom environment where one agent is the professor, one is the student, and one is the teaching assistant.
321
+
322
+ ##### 1. Creating a Task Directory and Configuring the Environment
323
+
324
+ First, we need to create a task directory and write our configuration file for the environment. In the `agentverse/tasks` directory, create a new directory called `nlp_classroom_3players`. Inside this directory, create a `config.yaml` file and write the following configuration:
325
+
326
+ ```yaml
327
+ # config.yaml
328
+ environment:
329
+ env_type: basic # Use the basic environment provided in AgentVerse
330
+ max_turns: 10 # Specify the maximum number of dialogue turns
331
+ rule:
332
+ order:
333
+ type: sequential # Use the sequential order
334
+ visibility:
335
+ type: all # Each message can be seen by all agents
336
+ selector:
337
+ type: basic # Basic selector (do not select)
338
+ updater:
339
+ type: basic # Basic updater (update the message to all agents)
340
+ describer:
341
+ type: basic # Basic describer (no description)
342
+ ```
343
+
344
+ This configuration specifies that we will use the basic environment provided in AgentVerse, with a maximum of 10 dialogue turns. We'll use the sequential order, with all messages visible to all agents. We won't use a selector, our updater will deliver messages to all agents, and our describer will provide no description.
345
+
346
+ ##### 2. Configuring the Agents
347
+
348
+ Next, we'll configure the agents. In the `config.yaml` file, we'll add the configuration for each agent. Here's an example configuration for the professor:
349
+
350
+ ```yaml
351
+ # config.yaml
352
+ agents:
353
+ -
354
+ agent_type: conversation
355
+ name: Professor Micheal # Name of the agent
356
+ role_description: You are Prof. Micheal, ... # Description of the agent
357
+ memory:
358
+ memory_type: chat_history # Will store all the chat history
359
+ prompt_template: *professor_prompt
360
+ llm:
361
+ llm_type: text-davinci-003 # Will use OpenAICompletion LLM
362
+ model: text-davinci-003 # The arguments passed to the api call
363
+ temperature: 0.7
364
+ max_tokens: 250
365
+ ```
366
+
367
+ In this example, we use the `conversation` agent type. We've given the agent a name and a description, and we'll store the chat history in memory. We've also provided a prompt template with placeholders marked as `${placeholder}`. These will be instantiated by the `_fill_prompt_template` method of the agent.
368
+
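Python's `string.Template` uses the same `${placeholder}` syntax, so the substitution step can be sketched as follows. This is only an illustrative stand-in for the agent's `_fill_prompt_template` method, and the placeholder names here are assumptions:

```python
from string import Template

# Illustrative stand-in for _fill_prompt_template: the placeholder names
# (agent_name, role_description, chat_history) are assumptions, not the
# exact fields used by AgentVerse.
professor_prompt = Template(
    "You are ${agent_name}. ${role_description}\n"
    "Conversation so far:\n${chat_history}\n"
    "Now respond as the professor."
)

filled = professor_prompt.safe_substitute(
    agent_name="Professor Micheal",
    role_description="You are Prof. Micheal, an NLP professor.",
    chat_history="(no messages yet)",
)
```

`safe_substitute` leaves unknown placeholders untouched instead of raising, which is convenient when a template is filled in stages.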
369
+ ##### 3. Writing an Output Parser
370
+
371
+ The next step is to write a simple parser for your agent's response. Because you may have specified the output format in your prompt template, you need to provide a corresponding parser. In this example, our prompt template instructs the model to output in the following format:
372
+
373
+ ```
374
+ Action: Speak
375
+ Action Input: (the content)
376
+ ```
377
+
378
+ We'll write a parser to extract the content from the agent's response. Refer to the code for more details. We've decorated our parser function with `@output_parser_registry.register('classroom_parser')` to register it with our framework. Finally, we import our parser in `agentverse/tasks/__init__.py`.
379
+
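A minimal version of the parsing logic could look like the sketch below. The `Action`/`Action Input` format comes from the prompt template above; the function body is only an illustration, and the real parser in the repository additionally integrates with the agent and registry classes via `@output_parser_registry.register(...)`:

```python
import re

# Hypothetical sketch of the extraction step behind a classroom output
# parser. In AgentVerse this logic would live inside a registered parser
# class; here only the string handling is shown.
def parse_speak(text: str) -> str:
    """Return the content of 'Action Input' when the action is 'Speak'."""
    action = re.search(r"Action:\s*(\w+)", text)
    content = re.search(r"Action Input:\s*(.+)", text, re.DOTALL)
    if action is None or content is None or action.group(1) != "Speak":
        raise ValueError(f"Malformed agent response: {text!r}")
    return content.group(1).strip()
```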
380
+ With these steps, we've successfully built a simple classroom environment and customized it for our needs.
381
+
382
+ ### Customization Guide for More Complex Environments
383
+
384
+ While we provide a basic framework for building environments with our five rule components, more complex environments may require further customization. A detailed documentation and tutorial is coming soon. Here we briefly introduce some steps you can take to customize your environment:
385
+
386
+ 1. **Customize the five rule components**. Each rule component has an interface, allowing you to customize its behavior to suit your specific needs. It's important to note that these components are not necessarily independent and can interact through the `rule_params` dictionary in the environment. You can create your own rule components and integrate them with the existing ones to build more complex interactions between agents.
387
+ 2. **Customize the environment itself**. Our `basic` environment provides a default execution order for the five rule components that is suitable for most cases, but you can inherit the `BaseEnvironment` class and write your own `run` method to implement a more sophisticated execution order.
388
+ 3. **Customize the agent**. Depending on your specific use case, you may also need to inherit the `BaseAgent` class. For example, you may want to use your local LLM as your agents or create agents with specialized knowledge or skills.
389
+
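For instance, plugging in a local LLM could take roughly the following shape. The class and method names are illustrative assumptions; consult the actual `BaseAgent` interface before implementing:

```python
# Hypothetical sketch of a custom agent backed by a local model. The real
# BaseAgent interface in AgentVerse may expect different method names and
# message types; this shows only the general shape.
class LocalLLMAgent:
    def __init__(self, name: str, generate_fn):
        self.name = name
        # generate_fn: any callable mapping a prompt string to a response
        # string, e.g. a thin wrapper around a locally hosted model.
        self.generate_fn = generate_fn

    def step(self, prompt: str) -> str:
        """Produce this agent's response for the current turn."""
        return self.generate_fn(prompt)

# Usage with a dummy "model" standing in for a local LLM:
agent = LocalLLMAgent("Echo", lambda prompt: prompt.upper())
```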
390
+
391
+
392
+ ## 🔎 Examples
393
+
394
+ Currently, we offer some simple examples in the `agentverse/tasks` directory, each demonstrating different possibilities of our framework. While the performance of these examples may not be optimal due to limited prompt engineering, they are intended to showcase the capabilities of our framework, such as allowing the use of tools.
395
+
396
+ Here's a brief overview of each example:
397
+
398
+ 1. `nlp_classroom_3players`: This example illustrates a simple case in which agents will speak in sequential order.
399
+ 2. `nlp_classroom_9players`: This is an NLP class example. Here, students can raise their hand when they have a question, and the professor can call on the students to let them ask. Students are only allowed to speak after they are called on.
400
+ 3. `nlp_classroom_9players_group`: This example showcases group discussions. The professor may initiate a group discussion when needed, and students can exclusively interact with fellow students within the same group during the discussion.
401
+ 4. `nlp_classroom_3players_withtool`: Students in this classroom can use Bing search API when listening to the class.
402
+ 5. `math_problem_2players_tools`: A simple example demonstrating how two agents can use the WolframAlpha API to play an arithmetic game.
403
+ 6. `prisoner_dilema`: The Prisoner's Dilemma is a thought experiment involving two rational agents facing a choice between cooperating for mutual benefit or betraying their partner for individual gain.
404
+ 7. `db_diag`: The Chief DBA agent monitors the database system for anomalies and, if any are detected, alerts the memory and CPU agents. These agents analyze the root causes and suggest optimization solutions. The Chief DBA agent then provides a diagnosis summary to the user, who can give instructions or evaluate the effectiveness of the proposed solutions.
405
+ 8. `sde_team`: In the SDE team, code writer, code tester and code reviewer collaborate on the code generation problem.
406
+ 9. `pokemon`: This example imitates the Pokémon game.
407
+
408
+
409
+ ## Star History
410
+
411
+ [![Star History Chart](https://api.star-history.com/svg?repos=OpenBMB/AgentVerse&type=Date)](https://star-history.com/#OpenBMB/AgentVerse&Date)
412
+
413
+
414
+ ## Citation
415
+ If you find this repo helpful, feel free to cite us.
416
+ ```
417
+ @article{chen2023agentverse,
418
+ title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},
419
+ author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},
420
+ journal={arXiv preprint arXiv:2308.10848},
421
+ year={2023}
422
+ }
423
+ ```
424
+
425
+ ## Contact
426
+
427
+ Weize Chen: [email protected]
428
+
429
+ [Yusheng Su](https://yushengsu-thu.github.io/): [email protected]
README_zh.md ADDED
@@ -0,0 +1,373 @@
1
+ <h1 align="center"> 🤖 AgentVerse 🪐 </h1>
2
+
3
+ <h3 align="center">
4
+ <p>A Framework for Building Multi-Agent Interaction Platforms</p>
5
+ </h3>
6
+ <p align="center">
7
+ <a href="https://github.com/OpenBMB/AgentVerse/blob/main/LICENSE">
8
+ <img alt="License: Apache2" src="https://img.shields.io/badge/License-Apache_2.0-green.svg">
9
+ </a>
10
+ <a href="https://www.python.org/downloads/release/python-3916/">
11
+ <img alt="Documentation" src="https://img.shields.io/badge/python-3.9+-blue.svg">
12
+ </a>
13
+ </p>
14
+
15
+ <p align="center">
16
+ <img src="./imgs/title.png" width="512">
17
+ </p>
18
+
19
+ <p align="center">
20
+ 【<a href="README.md">English </a> | Chinese】
21
+ </p>
22
+
23
+ **AgentVerse** offers a versatile framework that simplifies the process of creating custom multi-agent environments for large language models (LLMs). Designed for rapid, low-cost development and customization, our framework empowers researchers to focus on their research instead of being bogged down by implementation details.
24
+
25
+ ---
26
+
27
+ ## ✨ Features
28
+
29
+ - 🥳 **Efficient Environment Building:** Our framework provides a collection of basic building blocks for effortlessly creating multi-agent environments. With only a few lines in a configuration file, you can easily set up basic environments such as a chat room for LLMs. This process entails defining the environment's settings and prompts for the LLMs, enabling researchers like you to concentrate on experimentation and analysis.
30
+
31
+ - ⚙️ **Customizable Components**: AgentVerse simplifies the multi-agent environment by dividing it into five functional modules and defining their respective interfaces. For complex environments that cannot be built directly from the basic modules provided by AgentVerse, you can customize one or more of the interfaces within these five modules to efficiently create your own multi-agent environment according to your requirements.
32
+
33
+ - 🛠 **Tools (Plugins) Utilization**: AgentVerse supports multi-agent environments with tools. Currently, AgentVerse supports the tools provided in [BMTools](https://github.com/OpenBMB/BMTools).
34
+
35
+ ## 📰 What's New
36
+ - [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.
37
+ <img width="616" alt="Screen Shot 2023-09-01 at 12 08 57 PM" src="https://github.com/OpenBMB/AgentVerse/assets/11704492/6db1c907-b7fc-42f9-946c-89853a28f386">
38
+
39
+ You can refer to the work-in-progress code in this [branch](https://github.com/OpenBMB/AgentVerse/tree/AgentVerse-TaskSolving).
40
+
41
+ - [2023/6/5] 🎉 We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administration](#数据库运维), and a simple [H5 Pokemon Game](#宝可梦游戏) that enables interaction with the characters in Pokemon! Try out these demos and have fun!
42
+ - [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
43
+
44
+ ## 🌟 Join Us!
45
+ AgentVerse致力于为大型语言模型革命化多智能体环境,我们急切地寻找充满激情的合作伙伴与我们一起这一令人兴奋的旅程。
46
+
47
+ ### 您能如何贡献?
48
+ - **代码开发**: 如果您是工程师,我们希望您能够帮助我们细化、优化和扩展当前的框架。我们一直在寻找有才华的开发者来增强我们现有的特性和开发新模块。
49
+
50
+ - **文档和教程**: 如果您擅长写作,我们希望您能帮助我们改进文档,创建教程或写博客文章,使AgentVerse更容易被广大社区接受。
51
+
52
+ - **应用探索**: 如果您对多智能体应用感兴趣,并渴望使用AgentVerse进行实验,我们会很高兴支持您的旅程并看到您创造的内容!
53
+
54
+ - **反馈和建议**: 使用AgentVerse并为我们提供反馈。您的见解可以导致潜在的改进并确保我们的框架保持最佳状态。
55
+
56
+ 此外,如果您热衷于推进多智能体环境的前沿,并渴望更深入地进行研究,我们邀请您加入我们在THUNLP的团队。为了探索这一令人兴奋的机会,并与我们开始合作之旅,请联系[[email protected]]([email protected]) 和 [[email protected]]([email protected]) 表达您的兴趣。我们很乐意欢迎像您这样的有动力的个人加入我们的实验室!
57
+
58
+ ## 🗓 即将到来
59
+ - [ ] 我们的[paper](https://arxiv.org/abs/2308.10848)的代码发布
60
+ - [ ] 增加文档
61
+ - [ ] 支持更复杂的对话历史内存
62
+ - [ ] 支持本地LLM
63
+
64
+
65
## 👾 Simple Demo Video

We demonstrate the following cases that are built with AgentVerse.
<!--
### [![Demo video](https://i.imgur.com/vKb2F1B.png)](https://youtu.be/9JCVfzMFhaM)
-->
<!--![image](imgs/multiagent-min.gif)-->

<!-- - **NLP Classroom**: -->

#### NLP Classroom
In the NLP Classroom, the professor and students engage in interactive communication. When students have a question, they raise their hands and patiently wait for the professor to call on them. Only after being called on by the professor may a student speak and ask their question.

Launch the NLP Classroom example with the following command:
```bash
python main_demo.py --task nlp_classroom_9players
```

[NLP Classroom Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/6ea07850-595e-4a28-a82e-f863011353c2)


#### Prisoner's Dilemma
The Prisoner's Dilemma is a thought experiment that challenges two completely rational agents with a dilemma: they can cooperate with their partner for mutual benefit, or betray their partner ("defect") for an individual reward.

Launch the Prisoner's Dilemma example with the following command:
```bash
python main_demo.py --task prisoner_dilemma
```

[Prisoner's Dilemma Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/017c46e5-c738-4fca-9352-b008e2d518bd)


#### Software Development
In the software development example, a code writer, a code tester, and a code reviewer collaborate on a code generation problem. Given a problem, the code writer first drafts an implementation. The code tester runs the unit tests and provides feedback, and the code reviewer then generates a review. After collecting the test feedback and the review, the code writer iteratively refines the code.

Launch the software development example with the following command:
```bash
python main_demo.py --task sde_team/sde_team_2players
```

[Software Development Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4e54661626a)


#### [Database Administration](https://github.com/zhouxh19/AgentVerse_for_Database_Diagnosis)
In the database diagnosis scenario, the Chief DBA monitors the database system for anomalies. If one is detected, the memory and CPU agents are alerted to conduct root-cause analysis and suggest optimization solutions. The Chief DBA then provides a summarized diagnosis to the user, who can also contribute by giving instructions or evaluating the effectiveness of the proposed solutions.

First, you should configure the [database tools](https://github.com/OpenBMB/BMTools/blob/main/bmtools/tools/db_diag/readme.md) in BMTools and launch the BMTools server following its [guide](https://github.com/OpenBMB/BMTools/tree/main#211-local-tools). Then launch the database administration example with the following command:
```bash
python main_demo.py --task db_diag
```

[Database Administration Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/c633419d-afbb-47d4-bb12-6bb512e7af3a)

#### [Text Evaluation (ChatEval)](https://github.com/chanchimin/ChatEval)
For the text evaluation scenario, we recommend that users explore the [ChatEval](https://github.com/chanchimin/ChatEval) repository. Built on AgentVerse, ChatEval implements a multi-agent referee team to evaluate the quality of text generated by different models. Given two pieces of text, the roles in ChatEval autonomously debate their nuances and deliver judgments according to the personas assigned to them. Experiments show that this referee team, with the diverse roles specified in [config.yaml](#2-configuring-the-agents), aligns better with human evaluation. This demo is built on the [Fastchat](https://github.com/lm-sys/FastChat) repository, and we would like to thank them for their foundational work.


[Text Evaluation Video](https://github.com/OpenBMB/AgentVerse/assets/75533759/58f33468-f15b-4bac-ae01-8d0780019f85)

#### Pokemon Game
In this simple game, the NPCs can interact with each other autonomously. As a player, you take on a role and can talk to the other NPCs at any time. The game features six characters from Pokemon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie), and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).

To launch the Pokemon game, first start the local server with the following command:
```bash
uvicorn pokemon_server:app --reload --port 10002
```
Then open another terminal in the project's root directory and run:
```bash
cd ui
# If you do not have npm installed, you need to install it before running the following commands
# https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
# We have tested on [email protected], [email protected]
npm install
npm run watch
```
Wait for the compilation to complete, and have fun! (Use WASD to move around and SPACE to start a conversation.)

[Pokemon Game Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7)



## Contents

- [✨ Features](#-features)
- [📰 What's New](#-whats-new)
- [🌟 Join Us!](#-join-us)
- [How Can You Contribute?](#how-can-you-contribute)
- [🗓 Coming Soon](#-coming-soon)
- [👾 Simple Demo Video](#-simple-demo-video)
- [NLP Classroom](#nlp-classroom)
- [Prisoner's Dilemma](#prisoners-dilemma)
- [Software Development](#software-development)
- [Database Administration](#database-administration)
- [Text Evaluation (ChatEval)](#text-evaluation-chateval)
- [Pokemon Game](#pokemon-game)
- [Contents](#contents)
- [🚀 Getting Started](#-getting-started)
- [Installation](#installation)
- [CLI Example](#cli-example)
- [Local Website Demo](#local-website-demo)
- [💡 Philosophy](#-philosophy)
- [Environment](#environment)
- [Agent](#agent)
- [✍️ Customize Your Own Environment](#️-customize-your-own-environment)
- [A Simple Example: Building a Classroom Environment](#a-simple-example-building-a-classroom-environment)
- [1. Creating a Task Directory and Configuring the Environment](#1-creating-a-task-directory-and-configuring-the-environment)
- [2. Configuring the Agents](#2-configuring-the-agents)
- [3. Writing an Output Parser](#3-writing-an-output-parser)
- [Customization Guide for More Complex Environments](#customization-guide-for-more-complex-environments)
- [🔎 Examples](#-examples)
- [Star History](#star-history)
- [Citation](#citation)
- [Contact](#contact)



## 🚀 Getting Started

### Installation

```bash
pip install -U agentverse
```
Or you can install the package by manually cloning the latest repository:
```bash
git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
cd AgentVerse
pip install -r requirements.txt
```
Some users have reported problems installing `orjson`, which is required by `gradio`. A simple workaround is to install it with Anaconda: `conda install -c conda-forge orjson`.

You also need to export your OpenAI API key as follows:
```bash
# Export your OpenAI API key
export OPENAI_API_KEY="your_api_key_here"
```
If you want to use the Azure OpenAI service instead, configure the OpenAI API key and API base as follows:
```bash
export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"
```

If you want to use the tools provided by BMTools, install BMTools as follows:
```bash
git clone https://github.com/OpenBMB/BMTools.git
cd BMTools
pip install -r requirements.txt
python setup.py develop
```

### CLI Example

You can create a multi-agent environment provided by us. Take the classroom scenario as an example. In this scenario, there are nine agents: one plays the role of the professor, and the other eight are students.

```shell
python3 main.py --task nlp_classroom_9players
```

### Local Website Demo

We also provide a local website demo for this environment. You can launch it with the following command:

```shell
python3 main_demo.py --task nlp_classroom_9players
```
After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment.

## 💡 Philosophy

### Environment

At the core of our framework is the environment, which plays a crucial role in enabling researchers to study the behavior of agents under different conditions. We believe the environment should be flexible and extensible, allowing researchers to easily customize it to fit their needs. To achieve this, we abstract the environment into five rule components, so that implementing a different environment amounts to implementing different rules:

- **Describer**: This component provides a description of the environment to each agent at every turn. You can customize the describer to define the specific requirements of your environment, such as which agents a given agent can interact with.
- **Order**: This component defines the order in which agents take actions within the environment. You can customize the order to reflect the desired interaction between agents. We provide several basic order options, including `random`, `sequential`, and `concurrent` (all agents take an action in every turn).
- **Selector**: This component selects the valid messages generated by agents. Sometimes agents may generate invalid responses, and the selector is used to filter out unexpected results.
- **Updater**: This component updates each agent's memory. In certain cases, the response generated by one agent should not be seen by all agents (e.g., if the agents are in different rooms). For each response, the updater updates only the agents who can see it.
- **Visibility**: This component maintains the list of agents that each agent can see as the environment changes. For example, when an agent moves from one room to another, the list of agents visible to each agent should be updated by `visibility`.

By abstracting the environment into these five components, we have created a highly flexible and extensible framework that enables researchers to easily build and customize their own multi-agent environments.
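
As a rough illustration, the three built-in order options could be pictured as follows. The class and method names in this sketch are invented for exposition and are not AgentVerse's actual interface; only the `random`/`sequential`/`concurrent` semantics come from the description above.

```python
import random

# Illustrative sketches of the three built-in turn orders.
# Class and method names here are assumptions, not AgentVerse's API.

class SequentialOrder:
    """Agents act one at a time, cycling through a fixed order."""
    def next_agents(self, n_agents: int, turn: int) -> list[int]:
        return [turn % n_agents]

class RandomOrder:
    """One randomly chosen agent acts in each turn."""
    def next_agents(self, n_agents: int, turn: int) -> list[int]:
        return [random.randrange(n_agents)]

class ConcurrentOrder:
    """Every agent acts in every turn."""
    def next_agents(self, n_agents: int, turn: int) -> list[int]:
        return list(range(n_agents))
```

With three agents, `SequentialOrder` yields `[0]`, `[1]`, `[2]`, `[0]`, … across turns, while `ConcurrentOrder` always yields `[0, 1, 2]`.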

### Agent

Another fundamental component is the agent. Currently we provide two types of agents: **ConversationAgent** and **ToolAgent**. You can also customize your own agent by inheriting from the BaseAgent class.

## ✍️ Customize Your Own Environment

We provide several examples in the `agentverse/tasks` directory. To customize your environment, you should

1. Create a task directory in `agentverse/tasks`
2. Write the configuration file
3. Write an output parser that parses your agents' responses
4. Add your parser in `agentverse/tasks/__init__.py`

We will use a simple example in `agentverse/tasks/nlp_classroom_3players` to illustrate the procedure.

### A Simple Example: Building a Classroom Environment

To illustrate how to customize your environment, we will build a simple classroom environment in which one agent is the professor, one is a student, and one is a teaching assistant.

##### 1. Creating a Task Directory and Configuring the Environment

First, we need to create a task directory and write a configuration file for the environment. In the `agentverse/tasks` directory, create a new directory named `nlp_classroom_3players`. In this directory, create a `config.yaml` file and write the following configuration:

```yaml
# config.yaml
environment:
  env_type: basic    # Use the basic environment provided in AgentVerse
  max_turns: 10      # Specify the maximum number of dialogue turns
  rule:
    order:
      type: sequential     # Use the sequential order
    visibility:
      type: all            # Every message can be seen by all agents
    selector:
      type: basic          # Basic selector (does not select)
    updater:
      type: basic          # Basic updater (updates the message to all agents)
    describer:
      type: basic          # Basic describer (no description)
```

This configuration specifies that we will use the basic environment provided in AgentVerse, with a maximum of 10 dialogue turns. We use the sequential order, and all messages are visible to all agents. We do not use any selector, our updater updates the messages to all agents, and our describer provides no description.


##### 2. Configuring the Agents

Next, we will configure the agents. In the `config.yaml` file, we add a configuration for each agent. Here is an example configuration for the professor:

```yaml
# config.yaml
agents:
  -
    agent_type: conversation
    name: Professor Micheal                   # The name of the agent
    role_description: You are Prof. Micheal, ...   # The description of the agent
    memory:
      memory_type: chat_history               # Will store all the chat history
    prompt_template: *professor_prompt
    llm:
      llm_type: text-davinci-003              # Will use the OpenAICompletion LLM
      model: text-davinci-003                 # The arguments passed to the api call
      temperature: 0.7
      max_tokens: 250
```

In this example, we use the `conversation` agent type. We give the agent a name and a description, and store the chat history in its memory. We also provide a prompt template with placeholders marked as ${placeholder}. These will be instantiated by the agent's `_fill_prompt_template` method.
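
The `${placeholder}` syntax is the same as Python's built-in `string.Template`. As an illustrative sketch only (the actual filling is done by the agent's `_fill_prompt_template` method, and the template text below is made up), instantiation could look like:

```python
from string import Template

# A hypothetical prompt template with ${placeholder} fields.
prompt_template = Template(
    "You are ${agent_name}. ${role_description}\n"
    "Conversation so far:\n${chat_history}"
)

# Fill in the placeholders to produce the final prompt.
prompt = prompt_template.safe_substitute(
    agent_name="Professor Micheal",
    role_description="You teach an NLP course.",
    chat_history="(empty)",
)
```

`safe_substitute` leaves any unknown placeholders in place instead of raising, which is convenient while a template is still being drafted.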

##### 3. Writing an Output Parser

The next step is to write a simple parser for your agents' responses. Because you may have specified the output format in your prompt template, you need to provide a corresponding parser. In this example, we inform the model in the prompt template that it should output in the following format

```
Action: Speak
Action Input: (the content)
```

We will write a parser to extract the content from the agent's response. Refer to the code for more details. We decorate our parser function with `@output_parser_registry.register('classroom_parser')` to register it with our framework. Finally, we import our parser in `agentverse/tasks/__init__.py`.
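
As a minimal sketch of the extraction step (the function name, regex, and return shape below are illustrative assumptions; the registered parsers in the repository define the actual interface):

```python
import re

def parse_classroom_output(text: str) -> tuple[str, str]:
    """Extract the action and its argument from a response of the form:

        Action: Speak
        Action Input: (the content)
    """
    match = re.search(r"Action\s*:\s*(.+?)\s*\nAction\s+Input\s*:\s*(.+)", text, re.DOTALL)
    if match is None:
        # The LLM did not follow the requested format.
        raise ValueError(f"Unrecognized output format: {text!r}")
    return match.group(1).strip(), match.group(2).strip()

action, content = parse_classroom_output("Action: Speak\nAction Input: Welcome to the NLP class!")
```

Raising on malformed output matters here: it is exactly the kind of invalid response that the environment's selector is meant to filter out.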

With these steps, we have successfully built a simple classroom environment and customized it to our needs.

### Customization Guide for More Complex Environments

While we provide a basic framework for building environments with our five rule components, more complex environments may require further customization. Detailed documentation and tutorials are coming soon. Here, we briefly outline some steps for customizing your environment:

1. **Customize the five rule components.** Each rule component has an interface that allows you to customize its behavior to suit your specific needs. Note that these components are not necessarily independent; they can interact through the `rule_params` dictionary in the environment. You can create your own rule components and integrate them with the existing ones to build more complex interactions between agents.
2. **Customize the environment itself.** Our `basic` environment provides a default execution order for the five rule components that suits most cases, but you can inherit from the `BaseEnvironment` class and write your own `run` method to implement a more sophisticated execution order.
3. **Customize the agents.** Depending on your specific use case, you may also need to inherit from the `BaseAgent` class. For example, you may want to use your local LLM as an agent, or create agents with specialized knowledge or skills.
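
To illustrate point 1, here is a sketch of how two hypothetical rule components could coordinate through a shared dictionary, in the spirit of the `rule_params` mechanism. All class and method names below are invented for this sketch; only the shared-dictionary idea mirrors the framework.

```python
# Sketch: a visibility rule records group membership in a shared dict,
# and an updater consults it to decide which agents receive a message.
# All names are hypothetical, not AgentVerse's actual classes.

class GroupVisibility:
    def update_visible_agents(self, rule_params: dict) -> None:
        # Split four agents into two discussion groups.
        rule_params["groups"] = {0: [0, 1], 1: [2, 3]}

class GroupUpdater:
    def agents_to_update(self, rule_params: dict, sender: int) -> list[int]:
        # Deliver a message only to the sender's own group.
        for members in rule_params.get("groups", {}).values():
            if sender in members:
                return members
        return []

rule_params: dict = {}
GroupVisibility().update_visible_agents(rule_params)
receivers = GroupUpdater().agents_to_update(rule_params, sender=2)
```

Here agent 2's message reaches only agents 2 and 3, its group mates, because the updater reads the state that the visibility rule wrote into the shared dictionary.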

## 🔎 Examples

Currently, we provide some simple examples in the `agentverse/tasks` directory, each demonstrating different possibilities of our framework. While the performance of these examples may not be optimal due to limited prompt engineering, they are intended to showcase the framework's capabilities, such as allowing the use of tools.

Here is a brief overview of each example:

1. `nlp_classroom_3players`: This example illustrates a simple case in which agents speak in sequential order.
2. `nlp_classroom_9players`: This is an NLP classroom example. Here, students can raise their hands when they have a question, and the professor can call on students to let them ask. A student is only allowed to speak after being called on.
3. `nlp_classroom_9players_group`: This example demonstrates group discussions. The professor may initiate a group discussion when needed, during which students can only interact with classmates in the same group.
4. `nlp_classroom_3players_withtool`: In this classroom, students can use the Bing search API while listening to the lecture.
5. `math_problem_2players_tools`: A simple example showing how two agents can use the WolframAlpha API to play an arithmetic game.
6. `prisoner_dilemma`: The Prisoner's Dilemma is a thought experiment involving two rational agents who can choose to cooperate for mutual benefit or betray their partner for individual gain.
7. `db_diag`: The Chief DBA (agent) monitors the database system for anomalies and, if any are detected, alerts the memory and CPU agents. They (agents) analyze the root causes and suggest optimization solutions. The Chief DBA (agent) provides a diagnosis summary to the user, who can give instructions or evaluate the effectiveness of the proposed solutions.
8. `sde_team`: In the SDE team, a code writer, a code tester, and a code reviewer collaborate on a code generation problem.
9. `pokemon`: This example emulates a Pokemon game.


## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=OpenBMB/AgentVerse&type=Date)](https://star-history.com/#OpenBMB/AgentVerse&Date)


## Citation
If you find our framework useful in your work, please cite it as follows:
```
@misc{chen2023agentverse,
      title={AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents},
      author={Weize Chen and Yusheng Su and Jingwei Zuo and Cheng Yang and Chenfei Yuan and Chen Qian and Chi-Min Chan and Yujia Qin and Yaxi Lu and Ruobing Xie and Zhiyuan Liu and Maosong Sun and Jie Zhou},
      year={2023},
      eprint={2308.10848},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contact

Weize Chen: [email protected]

[Yusheng Su](https://yushengsu-thu.github.io/): [email protected]