DAG Orchestration Planning

This example uses AWEL DAG orchestration to show how to build a chat application on top of a large language model. Its core logic constructs model-inference parameters from the user's input and then converses with the model. The orchestration therefore has the following steps:

  • Receive the HTTP request
  • Process the request content
  • Run model inference
  • Parse the model output
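Conceptually, the four steps above form a linear async pipeline. The sketch below mimics that flow with plain standard-library async functions; the function names and the simple dict "model request" are illustrative stand-ins, not DB-GPT APIs:

```python
import asyncio

# Hypothetical stand-ins for the four pipeline stages. Real AWEL
# operators are classes wired together with `>>`, not bare functions.
async def receive_http_request() -> dict:
    # Step 1: the HTTP trigger delivers the parsed request body.
    return {"model": "proxyllm", "user_input": "hello"}

async def handle_request(body: dict) -> dict:
    # Step 2: turn the request body into model-inference parameters.
    return {
        "model": body["model"],
        "messages": [{"role": "human", "content": body["user_input"]}],
    }

async def run_llm(request: dict) -> dict:
    # Step 3: model inference (stubbed out here).
    return {"text": f"echo: {request['messages'][0]['content']}"}

async def parse_output(output: dict) -> dict:
    # Step 4: convert the model output into a serializable dict.
    return {"text": output["text"], "error_code": 0}

async def pipeline() -> dict:
    body = await receive_http_request()
    return await parse_output(await run_llm(await handle_request(body)))

print(asyncio.run(pipeline()))
```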


As before, the MapOperator and LLMOperator operators are built into DB-GPT, so they can be imported and used directly.

```python
from dbgpt._private.pydantic import BaseModel, Field
from dbgpt.core import ModelMessage, ModelRequest
from dbgpt.core.awel import DAG, HttpTrigger, MapOperator
from dbgpt.model.operator import LLMOperator
```

Custom Operators

As before, we need a custom operator that processes the user's request and builds the model input parameters. First, define the request body, which carries two fields: the model name `model` and the user's input `user_input`.

```python
class TriggerReqBody(BaseModel):
    model: str = Field(..., description="Model name")
    user_input: str = Field(..., description="User input")
```

Next, build the model-inference parameters from the request. We define a custom RequestHandleOperator that subclasses MapOperator; overriding its map method is all that is needed to construct the parameters.

```python
class RequestHandleOperator(MapOperator[TriggerReqBody, ModelRequest]):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    async def map(self, input_value: TriggerReqBody) -> ModelRequest:
        messages = [ModelMessage.build_human_message(input_value.user_input)]
        print(f"Receive input value: {input_value}")
        return ModelRequest.build_request(input_value.model, messages)
```
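The transformation this map performs can be exercised in isolation. Below is a standard-library-only mimic of it: the dataclasses are hypothetical stand-ins for TriggerReqBody and ModelRequest, not the real DB-GPT classes.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeReqBody:  # stand-in for TriggerReqBody
    model: str
    user_input: str

@dataclass
class FakeModelRequest:  # stand-in for ModelRequest
    model: str
    messages: list

async def map_request(input_value: FakeReqBody) -> FakeModelRequest:
    # Mirrors RequestHandleOperator.map: wrap the user's input as a
    # human message and attach the requested model name.
    messages = [{"role": "human", "content": input_value.user_input}]
    return FakeModelRequest(model=input_value.model, messages=messages)

req = asyncio.run(map_request(FakeReqBody(model="proxyllm", user_input="hello")))
print(req)
```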

DAG Orchestration

With the operators written, the next step is to wire them together into an AWEL DAG: `trigger >> request_handle_task >> llm_task >> model_parse_task`.

```python
with DAG("dbgpt_awel_simple_dag_example") as dag:
    # Receive http request and trigger dag to run.
    trigger = HttpTrigger(
        "/examples/simple_chat", methods="POST", request_body=TriggerReqBody
    )
    request_handle_task = RequestHandleOperator()
    llm_task = LLMOperator(task_name="llm_task")
    model_parse_task = MapOperator(lambda out: out.to_dict())
    trigger >> request_handle_task >> llm_task >> model_parse_task
```
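The `>>` chaining reads naturally because pipeline nodes can overload Python's right-shift operator to record downstream edges. The toy class below illustrates that general pattern; it is not DB-GPT's actual implementation.

```python
class Op:
    """Toy pipeline node chained with `>>` (illustrative only)."""

    def __init__(self, fn):
        self.fn = fn
        self.next = None

    def __rshift__(self, other):
        # Record the downstream node; returning the right operand
        # is what makes `a >> b >> c` chain left to right.
        self.next = other
        return other

    def run(self, value):
        value = self.fn(value)
        return self.next.run(value) if self.next else value

double = Op(lambda x: x * 2)
inc = Op(lambda x: x + 1)
double >> inc
print(double.run(5))  # (5 * 2) + 1 = 11
```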

Testing and Verification

As in the earlier tutorials, there are two ways to verify the example:

  1. Start dbgpt_server: `python dbgpt/app/dbgpt_server.py`
  2. Start through the development environment:
```python
if __name__ == "__main__":
    if dag.leaf_nodes[0].dev_mode:
        # Development mode, you can run the dag locally for debugging.
        from dbgpt.core.awel import setup_dev_environment

        setup_dev_environment([dag], port=5555)
    else:
        # Production mode, DB-GPT will automatically load and execute the
        # current file after startup.
        pass
```
```shell
DBGPT_SERVER="http://127.0.0.1:5555"
curl -X POST $DBGPT_SERVER/api/v1/awel/trigger/examples/simple_chat \
    -H "Content-Type: application/json" -d '{
        "model": "proxyllm",
        "user_input": "hello"
    }'
```

Response:

```json
{"text":"Hello! How can I assist you today?","error_code":0,"model_context":{"prompt_echo_len_char":-1,"has_format_prompt":false},"finish_reason":null,"usage":null,"metrics":{"collect_index":10,"start_time_ms":1704436285946,"end_time_ms":1704436293358,"current_time_ms":1704436293358,"first_token_time_ms":null,"first_completion_time_ms":1704436292905,"first_completion_tokens":null,"prompt_tokens":null,"completion_tokens":null,"total_tokens":null,"speed_per_second":null,"current_gpu_infos":null,"avg_gpu_infos":null}}
```
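The same request can be issued from Python. The sketch below builds the payload with the standard library and parses a response of the shape shown above; the URL and port are taken from the example, and the network call itself is left commented out so the helpers can be checked without a running server.

```python
import json
import urllib.request

# Endpoint from the curl example above.
ENDPOINT = "http://127.0.0.1:5555/api/v1/awel/trigger/examples/simple_chat"

def build_request(model: str, user_input: str) -> urllib.request.Request:
    # Same JSON body as the curl example.
    payload = json.dumps({"model": model, "user_input": user_input}).encode()
    return urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )

def extract_text(response_body: str) -> str:
    # Pull the generated text out of the response; raise on model errors.
    data = json.loads(response_body)
    if data.get("error_code", 0) != 0:
        raise RuntimeError(f"model error: {data}")
    return data["text"]

# Sending requires the server to be running:
#   with urllib.request.urlopen(build_request("proxyllm", "hello")) as resp:
#       print(extract_text(resp.read().decode()))

# Offline check against a trimmed copy of the sample response above.
sample = '{"text":"Hello! How can I assist you today?","error_code":0}'
print(extract_text(sample))  # Hello! How can I assist you today?
```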

:::danger ⚠️ Note: the port used for testing must match the server's startup port.

:::