Chat Completion
Leverage advanced AI models from Mistral AI to perform complex tasks such as categorization, analysis, summarization, or decision support.
For more information on Mistral AI's API, see the Mistral AI Chat Completion documentation.
SDK Import:

```python
from admyral.actions import mistralai_chat_completion
```
Arguments:
| Argument Name | Description | Required |
| --- | --- | --- |
| **Model** `model` | The model to use for the chat completion (e.g., `open-mistral-7b`). | Yes |
| **Prompt** `prompt` | The input prompt from which to generate the chat completion. | Yes |
| **Top P** `top_p` | Nucleus sampling value between 0 and 1: only tokens within the top P probability mass are considered. It is recommended to tweak this or Temperature, but not both. | - |
| **Temperature** `temperature` | Sampling temperature between 0 and 2. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more deterministic. It is recommended to adjust this or Top P, but not both. | - |
| **Max Tokens** `max_tokens` | The maximum number of tokens to generate for the completion. | - |
Returns
The generated chat completion as a string.
Required Secrets
| Secret Placeholder | Description |
| --- | --- |
| `MISTRALAI_SECRET` | Mistral AI secret. See Mistral AI setup. |
SDK Example
```python
response = mistralai_chat_completion(
    model="open-mixtral-8x22b",
    prompt="Why is the ocean salty?",
    secrets={
        "MISTRALAI_SECRET": "my_stored_mistralai_secret"
    }
)
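As a sketch of how the optional arguments from the table above could be combined, the snippet below builds the keyword arguments for a more deterministic, length-capped completion. The prompt text and parameter values are illustrative assumptions, not part of the Admyral API; note that it sets `temperature` and leaves `top_p` unset, following the recommendation to tune one but not both.

```python
# Illustrative parameter choices (hypothetical values, not defaults).
# Per the arguments table, adjust temperature OR top_p, not both.
kwargs = {
    "model": "open-mistral-7b",
    "prompt": "Summarize why the ocean is salty in one sentence.",
    "temperature": 0.2,  # lower temperature -> more deterministic output
    "max_tokens": 128,   # cap the length of the generated completion
}

# The action would then be invoked as in the example above:
# response = mistralai_chat_completion(
#     **kwargs,
#     secrets={"MISTRALAI_SECRET": "my_stored_mistralai_secret"},
# )
```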