Comprehensive definitions for AI, LLMs, and fine-tuning terminology
Artificial Intelligence (AI): Computer systems that perform tasks normally requiring human intelligence.
Machine Learning (ML): Algorithms that improve through experience rather than through explicit programming.
Deep Learning: Machine learning using neural networks with many layers.
Neural Network: A computing system inspired by biological brains, composed of interconnected nodes (neurons).
Transformer: The neural network architecture, built on attention mechanisms, that powers modern LLMs.
Attention: A mechanism that lets a model focus on the most relevant parts of its input when generating each piece of output.
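Attention can be sketched in a few lines of plain Python: score one query against a set of keys, softmax the scores into weights, and return the weighted sum of the values. The 2-dimensional vectors below are toy values; real models do this with large matrices and many attention heads in parallel.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query (illustrative sketch)."""
    d = len(query)
    # Similarity between the query and each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax: turn scores into positive weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted average of the values: the output leans toward the
    # value whose key best matched the query.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

q = [1.0, 0.0]                        # query matches the first key best
ks = [[1.0, 0.0], [0.0, 1.0]]         # two keys
vs = [[10.0, 0.0], [0.0, 10.0]]       # their associated values
out = attention(q, ks, vs)            # weighted toward the first value
```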
Large Language Model (LLM): An AI model trained on massive amounts of text to understand and generate human language.
Token: The basic unit of text an LLM processes (a word, subword, or character).
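A toy sketch of tokenization: real LLM tokenizers (BPE, SentencePiece) learn subword vocabularies from data, but the core idea of splitting text into known units and falling back to smaller pieces is the same. The vocabulary here is hypothetical.

```python
def tokenize(text, vocab):
    """Split text into tokens from a fixed vocabulary (toy version)."""
    tokens = []
    for word in text.lower().split():
        if word in vocab:
            tokens.append(word)       # whole word is a known token
        else:
            tokens.extend(word)       # fall back to character tokens
    return tokens

vocab = {"fine", "tuning", "the", "model"}  # hypothetical vocabulary
known = tokenize("Fine tuning the model", vocab)
fallback = tokenize("Fine tuning GPT", vocab)  # "gpt" splits into characters
```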
Context Window: The maximum amount of text, measured in tokens, that a model can process at once.
Fine-Tuning: Adapting a pre-trained model to a specific task or domain through additional training.
LoRA (Low-Rank Adaptation): An efficient fine-tuning method that trains small adapter matrices instead of all of the model's weights.
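The low-rank idea can be sketched with tiny matrices: the frozen base weight W stays untouched, while a rank-1 product A·B (far fewer parameters) supplies the trainable update. The shapes and values below are illustrative.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Frozen 3x3 base weight (9 parameters) is never modified.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Trainable rank-1 adapter: A is 3x1, B is 1x3 (6 parameters instead of 9;
# the savings grow dramatically for real layer sizes).
A = [[0.1], [0.2], [0.3]]
B = [[1.0, 0.0, 1.0]]
delta = matmul(A, B)  # rank-1 update with the same shape as W
W_adapted = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
```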
Training: The process of teaching a model by adjusting its weights to minimize prediction errors.
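Training in miniature: the sketch below fits a hypothetical one-parameter model y = w·x by repeatedly computing the gradient of the mean squared error and nudging the weight against it, exactly the adjust-weights-to-reduce-error loop described above.

```python
# Tiny dataset following the true relationship y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0     # start with an uninformed weight
lr = 0.05   # learning rate: step size for each update

for step in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # move the weight downhill, reducing the error
# w has converged very close to the true value 2.0
```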
Epoch: One complete pass through the entire training dataset.
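Epochs in code: each pass shuffles the (toy) dataset and walks it in batches; num_updates counts how many weight updates three epochs would produce. The batch size is an arbitrary choice.

```python
import random

random.seed(0)
dataset = list(range(8))  # eight toy training examples
batch_size = 2            # hypothetical choice
num_updates = 0

for epoch in range(3):           # three complete passes = three epochs
    random.shuffle(dataset)      # a fresh example order each epoch
    for i in range(0, len(dataset), batch_size):
        batch = dataset[i:i + batch_size]
        num_updates += 1         # a real loop would update weights here
# 8 examples / batch of 2 = 4 updates per epoch, 12 in total
```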
Base Model: The pre-trained foundation model used as the starting point for fine-tuning.
Dataset: A structured collection of training examples used to teach a model.
Hallucination: When a model generates plausible-sounding but false or nonsensical information.
Hyperparameters: Settings chosen before training that control the training process and model behavior (e.g., learning rate, batch size).
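Hyperparameters are often collected in a single configuration object; the names below follow common conventions but are illustrative, and sensible values vary by model, dataset, and framework.

```python
# Illustrative hyperparameters for a fine-tuning run (hypothetical values).
hyperparams = {
    "learning_rate": 2e-4,  # step size for weight updates
    "epochs": 3,            # full passes over the training dataset
    "batch_size": 8,        # examples processed per weight update
    "lora_rank": 16,        # adapter size when fine-tuning with LoRA
    "temperature": 0.7,     # sampling randomness at inference time
}
```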
Instruction: The input or task description in a training example (what the user asks).
Loss: A measure of how wrong the model's predictions are during training; training works to drive it down.
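A common loss for language models is cross-entropy: the negative log of the probability the model assigned to the correct next token. The sketch below shows that a confident correct prediction yields a small loss, while an uncertain one yields a larger loss.

```python
import math

def cross_entropy(probs, target_index):
    """Negative log probability the model assigned to the correct token."""
    return -math.log(probs[target_index])

# Two hypothetical predicted distributions over a 3-token vocabulary,
# where index 0 is the correct next token.
confident = [0.90, 0.05, 0.05]
uncertain = [0.34, 0.33, 0.33]
loss_good = cross_entropy(confident, 0)  # small loss (~0.105)
loss_bad = cross_entropy(uncertain, 0)   # larger loss (~1.079)
```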
Overfitting: When a model memorizes its training data instead of learning patterns that generalize.
Completion: The desired response or answer in a training example.
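Instruction/completion pairs are frequently stored one JSON object per line (JSONL); the field names below ("prompt", "completion") are a common convention, but providers and frameworks differ on the exact schema.

```python
import json

# A hypothetical training example: the prompt is what the user asks,
# the completion is the desired answer.
example = {
    "prompt": "Summarize: The cat sat on the mat.",
    "completion": "A cat sat on a mat.",
}

line = json.dumps(example)     # one line of a JSONL training file
restored = json.loads(line)    # round-trips back to the same example
```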
Prompt: The text input you send to a model to get a response.
Validation Set: Data held back from training and used to check whether the model generalizes to unseen examples.
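A minimal sketch of carving out a validation set: shuffle, then hold back a fraction (10% here, an arbitrary choice) that the training loop never sees. Rising validation loss alongside falling training loss is the classic sign of overfitting.

```python
import random

random.seed(42)
examples = list(range(100))   # toy dataset of 100 examples
random.shuffle(examples)      # shuffle before splitting to avoid ordering bias

split = int(len(examples) * 0.9)
train_set = examples[:split]        # 90 examples used to adjust the weights
validation_set = examples[split:]   # 10 examples held back for evaluation
```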