Model
Last updated
Rnet supports a wide range of models tailored for different tasks, including text generation, embedding creation, search optimization, and speech processing. To get started with Rnet, you'll first need to configure the LLMs that will power your application. Simply navigate to Settings > Model Providers to add and set up the models you'll be using.
Before integrating these models into Rnet, make sure to obtain the necessary API key from the provider's official website. Once you have the API key, you can easily configure the model within Rnet to start leveraging its powerful AI features in your application.
Rnet classifies models into four types, each serving a different purpose:
System Inference Models
Used in applications for tasks like chat, name generation, and suggesting follow-up questions.
OpenAI, Azure OpenAI Service, Anthropic, Hugging Face Hub, Replicate, Xinference, OpenLLM, iFLYTEK SPARK, WENXINYIYAN, TONGYI, Minimax, ZHIPU (ChatGLM), Ollama, LocalAI.
Embedding Models
Used to embed segmented documents in knowledge bases and to process user queries in applications.
OpenAI, ZHIPU (ChatGLM), Jina AI (Jina Embeddings 2)
Rerank Models
Used to improve the relevance of search results in LLM applications.
Cohere
Speech-to-Text Models
Convert spoken words to text in conversational applications.
OpenAI
Rnet automatically chooses a default model based on how it is used; you can adjust this setting under Settings > Model Providers.
Rnet supports major model providers like OpenAI’s GPT series and Anthropic’s Claude series. Each model’s capabilities and parameters differ, so select a model provider that suits your application’s needs. You can obtain the API key from the model provider’s official website before using it in Rnet.
Here, we’ll use OpenAI’s API key as an example. Using an API key gives you access to a wider range of models.
Once the API key is correctly configured, these models are ready for use in your application.
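As a quick sanity check outside Rnet, you can verify that an OpenAI API key is valid before configuring it. The sketch below lists the models the key can access; it assumes the standard OpenAI REST endpoint and Bearer-token header format, and reads the key from a hypothetical `OPENAI_API_KEY` environment variable:

```python
import os
import urllib.request

def auth_headers(api_key: str) -> dict:
    # OpenAI-compatible APIs expect the key as a Bearer token
    # in the Authorization header.
    return {"Authorization": f"Bearer {api_key}"}

def list_models(api_key: str, base_url: str = "https://api.openai.com/v1") -> bytes:
    # GET /v1/models returns the models this key can access;
    # an HTTP 401 response means the key is invalid or revoked.
    req = urllib.request.Request(f"{base_url}/models",
                                 headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY", "")
    if key:
        print(list_models(key))
    else:
        print("Set OPENAI_API_KEY first.")
```

If the request succeeds, the same key can be pasted into Settings > Model Providers in Rnet.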