
Parameters in Large Language Models (LLMs) are internal values learned by the model during training. They define the model’s behavior, shaping its understanding of language and its ability to generate human-like text.

When we say a model has x parameters, it means that for each token it processes, the model applies those learned values to predict the next token. The more parameters a model has, the more complex and nuanced its understanding and generation of language can be, allowing it to make more accurate predictions from the tokens it processes.
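As a concrete illustration, the sketch below builds a toy PyTorch model (the layer sizes are made up for the example) and counts its learnable parameters; headline figures such as “7B parameters” are this same count at much larger scale:

```python
import torch.nn as nn

# A minimal sketch with made-up sizes: a token-embedding table plus one
# dense layer, just to show what "parameters" are and how they are counted.
model = nn.Sequential(
    nn.Embedding(50_000, 512),  # 50,000 tokens x 512 dims = 25,600,000 weights
    nn.Linear(512, 512),        # 512 x 512 weights + 512 biases = 262,656
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} learnable parameters")  # 25,862,656
```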

The architecture may contain millions or billions of these parameters. Distinct from these learned weights, a generation-time setting called “temperature” regulates the creativity of the AI’s responses, with higher values increasing diversity and lower values producing more deterministic output. Overall, LLM parameters are essential in shaping the AI’s linguistic abilities and effectiveness.
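To make the temperature mechanic concrete, here is a minimal sketch (with made-up logit values) of how temperature is typically applied: the model’s raw scores are divided by the temperature before the softmax, so low values sharpen the distribution and high values flatten it:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Divide logits by temperature, softmax, then sample one token id."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5]                     # made-up scores for 3 tokens
print(sample_with_temperature(logits, 0.2))  # nearly always picks token 0
print(sample_with_temperature(logits, 2.0))  # picks are far more varied
```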

Note: “B” stands for billion (miljard in Dutch).


LLM Generation parameters:

🔢 max_tokens: Limits the response length.
🔥 temperature: Controls creativity vs. focus.
🎯 top_p: Samples from the smallest set of tokens whose cumulative probability reaches p (nucleus sampling).
🎲 top_k: Picks from the k most likely tokens.
📉 frequency penalty: Penalizes tokens in proportion to how often they have already appeared, reducing repetition.
👀 presence penalty: Penalizes any token that has already appeared at least once, encouraging new topics.
🛑 stop sequence: Defines where the response should end.
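The sketch below (pure NumPy, with made-up logits) shows how top_k and top_p filtering are commonly combined before sampling: top_k keeps only the k highest-scoring tokens, then top_p keeps the smallest set whose cumulative probability reaches p, and everything else is masked out:

```python
import numpy as np

def filter_logits(logits, top_k=0, top_p=1.0):
    """Mask logits with top-k, then top-p (nucleus) filtering.
    Masked positions get -inf so the softmax assigns them zero probability."""
    logits = np.asarray(logits, dtype=float).copy()
    if top_k > 0:
        kth_value = np.sort(logits)[-top_k]    # score of the k-th best token
        logits[logits < kth_value] = -np.inf
    if top_p < 1.0:
        order = np.argsort(logits)[::-1]       # token ids, most likely first
        probs = np.exp(logits[order] - np.max(logits))
        probs /= probs.sum()
        keep = np.searchsorted(np.cumsum(probs), top_p) + 1  # smallest nucleus >= top_p
        logits[order[keep:]] = -np.inf
    return logits

rng = np.random.default_rng(0)
logits = np.array([3.0, 2.5, 1.0, 0.2, -1.0])    # made-up scores for 5 tokens
filtered = filter_logits(logits, top_k=4, top_p=0.9)
probs = np.exp(filtered - filtered.max())
probs /= probs.sum()
next_token = rng.choice(len(probs), p=probs)     # samples among tokens 0-2 only
print(next_token, probs.round(3))
```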

LLM generation parameters in practice:
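As a usage sketch, here is how several of these parameters map onto a chat-completion call with the OpenAI Python client (the model name and values are illustrative; note that this particular API does not expose top_k, which is instead found in libraries such as Hugging Face transformers):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",       # illustrative model name
    messages=[{"role": "user", "content": "Name three Dutch cities."}],
    max_tokens=100,            # limit the response length
    temperature=0.7,           # moderate creativity
    top_p=0.9,                 # nucleus-sampling threshold
    frequency_penalty=0.5,     # reduce repetition of frequent tokens
    presence_penalty=0.3,      # discourage reusing tokens that already appeared
    stop=["\n\n"],             # end the response at a blank line
)
print(response.choices[0].message.content)
```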
