This is a simplified guide to Meta-Llama-3.1-405b-Instruct, an AI model maintained by Meta. If you like this kind of analysis, you should join AImodels.fyi or follow us on Twitter.
Model overview
The meta-llama-3.1-405b-instruct is Meta's flagship 405-billion-parameter language model, fine-tuned for chat completions. It is part of a family of similar models from Meta, including the meta-llama-3-70b-instruct, meta-llama-3-8b-instruct, llama-2-7b-chat, llama-2-13b-chat, and llama-2-70b-chat models. These models span a range of parameter sizes and are tailored for different chat and completion tasks.
Model inputs and outputs
The meta-llama-3.1-405b-instruct model takes a variety of inputs, including:
Inputs
- Prompt: The text prompt to generate completions for
- System Prompt: A system prompt that helps guide the model's behavior
- Top K: The number of highest probability tokens to consider for generating the output
- Top P: A probability threshold for generating the output
- Min Tokens: The minimum number of tokens the model should generate as output
- Max Tokens: The maximum number of tokens the model should generate as output
- Temperature: The value used to modulate the next token probabilities
- Presence Penalty: Penalizes tokens that have already appeared in the output, encouraging the model to introduce new topics
- Frequency Penalty: Penalizes tokens in proportion to how often they have already appeared, discouraging repetition
- Stop Sequences: A comma-separated list of sequences to stop generation at
The model outputs an array of generated text chunks, which can be concatenated into the full completion.
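To show how the inputs above fit together, here is a minimal sketch of calling the model. It assumes the Replicate Python client and that the model is hosted on Replicate under the `meta/meta-llama-3.1-405b-instruct` identifier; the default values shown for each parameter are illustrative, not the model's documented defaults, so adapt them (and the client) to your provider.

```python
# Minimal sketch of invoking meta-llama-3.1-405b-instruct with the
# inputs described above. The helper just assembles the input payload;
# the actual call requires the `replicate` package and an API token.

def build_input(prompt: str, **overrides) -> dict:
    """Assemble the input payload from the parameters listed above.

    All default values here are illustrative examples, not the
    model's documented defaults.
    """
    payload = {
        "prompt": prompt,
        "system_prompt": "You are a helpful assistant.",
        "top_k": 50,                # consider the 50 most likely tokens
        "top_p": 0.9,               # nucleus-sampling probability threshold
        "min_tokens": 0,
        "max_tokens": 512,
        "temperature": 0.7,         # lower values = more deterministic output
        "presence_penalty": 0.0,
        "frequency_penalty": 0.0,
        "stop_sequences": "</s>",   # comma-separated list of stop strings
    }
    payload.update(overrides)
    return payload


if __name__ == "__main__":
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

    output = replicate.run(
        "meta/meta-llama-3.1-405b-instruct",
        input=build_input("Explain transformers in one paragraph."),
    )
    # The model returns an array of text chunks; join them into one string.
    print("".join(output))
```

Keeping the payload construction separate from the API call makes it easy to reuse the same parameter set across providers that expose this model.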
Capabilities
The meta-llama-3.1-405b-instruct mod...