LLM

Definition

The LLM node invokes a large language model to answer questions or process natural language.

In this scenario, the LLM node reorganizes retrieved knowledge in order to respond to user questions.

How to Configure

Configuration Steps:

  1. Select a Model: Rnet supports major global models, including OpenAI's GPT series, Anthropic's Claude series, and Google's Gemini series. Choose a model based on its inference capability, cost, response speed, and context window size, matched to your scenario and task type.

  2. Configure Model Parameters: Model parameters control generation behavior, such as temperature, top_p, maximum tokens, and response format. To simplify tuning, the system provides three preset parameter sets: Creative, Balanced, and Precise (see the configuration sketch after these steps).

  3. Write Prompts: The LLM node offers an easy-to-use prompt composition page. Selecting a chat model or completion model will display different prompt composition structures.

  4. Advanced Settings: You can enable memory, set memory windows, and use the Jinja-2 template language for more complex prompts.
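
To see how these four settings fit together, here is a minimal sketch of an LLM node configuration expressed as a Python dictionary. The field names (model, parameters, prompt, memory) are illustrative assumptions for this guide, not Rnet's actual schema.

```python
# Illustrative sketch only -- the field names are assumptions, not Rnet's actual schema.
llm_node_config = {
    "model": "gpt-4o",                  # Step 1: the chosen provider model (hypothetical name)
    "parameters": {                     # Step 2: generation parameters (a "Balanced"-style preset)
        "temperature": 0.7,             #   higher = more creative, lower = more deterministic
        "top_p": 0.9,
        "max_tokens": 1024,
        "response_format": "text",
    },
    "prompt": [                         # Step 3: chat-model prompt structure
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": "{{context}}\n\nQuestion: {{query}}"},
    ],
    "memory": {                         # Step 4: advanced settings
        "enabled": True,
        "window_size": 10,              #   number of history turns to include
    },
}
```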

If you are using Rnet for the first time, complete the model configuration under System Settings → Model Providers before selecting a model in the LLM node.

Explanation of Special Variables

Context Variables

Context variables are a special type of variable defined within the LLM node, used to insert externally retrieved text content into the prompt.

In this scenario, the LLM node sits downstream of the Variable Aggregator node. Its output variable Group1.output must be bound to the context variable within the LLM node. Once bound, inserting the context variable at the appropriate position in the prompt incorporates the externally retrieved knowledge into the prompt.

If the context variable is instead bound to an ordinary variable from an upstream node, such as a string-type variable from the Start node, it can still be used as external knowledge, but the citation and attribution feature will be disabled.
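
Conceptually, the node fills the context variable with the retrieved text before the prompt reaches the model. The Python sketch below illustrates that substitution; the variable names (retrieved_chunks, prompt_template, query) are hypothetical and only stand in for the node's internal behavior.

```python
# Conceptual sketch of context-variable substitution -- names are hypothetical.
retrieved_chunks = [
    "Rnet supports OpenAI, Anthropic, and Google models.",
    "Model parameters include temperature, top_p, and maximum tokens.",
]

# The context variable (standing in for an upstream output such as Group1.output)
# is joined and inserted at its placeholder position in the prompt.
context = "\n\n".join(retrieved_chunks)

prompt_template = (
    "Use the context below to answer the question.\n\n"
    "{context}\n\n"
    "Question: {query}"
)
print(prompt_template.format(context=context, query="Which model providers does Rnet support?"))
```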

Conversation History

In Chatflow, this variable is carried into the LLM node and inserts the chat history between the AI and the user into the prompt, helping the LLM understand the context of the conversation.

The conversation history variable is not widely used; it can only be inserted when a text completion model is selected in Chatflow.
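
For a text completion model, the conversation history is effectively spliced into the prompt as plain text. A rough sketch of that expansion, with all names being illustrative assumptions:

```python
# Rough sketch of how a conversation history variable might expand inside a
# completion-model prompt. All names are illustrative assumptions.
history = [
    ("Human", "What is the LLM node for?"),
    ("Assistant", "It invokes a large language model inside a workflow."),
]

conversation_history = "\n".join(f"{role}: {text}" for role, text in history)

prompt = (
    f"{conversation_history}\n"
    "Human: How do I configure it?\n"
    "Assistant:"
)
print(prompt)
```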

Advanced Features

Memory: When enabled, each input to the LLM node will include the chat history of the conversation, helping the model understand the context and improving question comprehension in interactive dialogue.

Memory Window: When the memory window is closed, the system dynamically filters how much chat history is passed based on the model's context window; when open, you can precisely control the amount of chat history passed, by number of turns.

Conversation Role Name Settings: Because of differences in their training, models follow role-name instructions to different degrees, e.g. Human/Assistant or Human/AI. To adapt prompts to multiple models, the system provides conversation role name settings; changing the role name changes the role prefix in the conversation history.
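
Taken together, the memory window and role name settings determine how much history is passed and how each turn is labeled. The sketch below illustrates both under assumed names; it models only the count-based window (the dynamic, context-window-based filtering is not shown).

```python
# Minimal sketch of a count-based memory window plus configurable role prefixes.
# Function and parameter names are assumptions for illustration.
def render_history(turns, window_size=None, user_prefix="Human", ai_prefix="Assistant"):
    """Keep only the most recent `window_size` turns and label each with its role prefix."""
    if window_size is not None:          # window open: precise, count-based control
        turns = turns[-window_size:]
    prefixes = {"user": user_prefix, "ai": ai_prefix}
    return "\n".join(f"{prefixes[role]}: {text}" for role, text in turns)

turns = [("user", "Hello"), ("ai", "Hi, how can I help?"), ("user", "Explain memory.")]
print(render_history(turns, window_size=2, user_prefix="Human", ai_prefix="AI"))
# AI: Hi, how can I help?
# Human: Explain memory.
```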

Jinja-2 Templates: The LLM prompt editor supports the Jinja-2 template language, letting you use this powerful Python templating tool for lightweight data transformation and logical processing.
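
For example, a Jinja-2 template can loop over retrieved items and apply filters before the text reaches the model. A small self-contained sketch using the Python jinja2 package (the variable names chunks and query are illustrative):

```python
# Small Jinja-2 example: lightweight data transformation inside a prompt template.
# Requires the `jinja2` package; the variable names are illustrative.
from jinja2 import Template

template = Template(
    "Answer the question using the numbered context below.\n"
    "{% for chunk in chunks %}{{ loop.index }}. {{ chunk | trim }}\n{% endfor %}"
    "\nQuestion: {{ query }}"
)

print(template.render(
    chunks=["  Rnet supports major model providers. ", "The LLM node has memory settings."],
    query="What does the LLM node support?",
))
```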