EdukaAI

Fine-Tuning Guide

Choose your preferred method to fine-tune a model using your EdukaAI dataset. Each method is a complete, self-contained guide from start to finish.

💡

Not sure which method to choose?

If you're new to fine-tuning, start with Ollama for instant results without training, or try Axolotl for actual model training with easy YAML configuration. Mac users should check out MLX for the fastest training on Apple Silicon.

🦎RECOMMENDED

Axolotl

Real Fine-Tuning

YAML-based configuration with true LoRA training. Complete model output (not just adapters). Best with cloud GPU.

⏱️ Training time: 5-30 min

🎮 GPU: Recommended

📊 Best for: Production models

View Complete Guide →
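
As a rough illustration, an Axolotl LoRA run is driven by a single YAML file like the sketch below. The base model, dataset filename, and hyperparameters are placeholders — adjust them to your exported EdukaAI dataset:

```yaml
# Minimal LoRA config sketch (values are illustrative, not prescriptive)
base_model: meta-llama/Llama-3.2-1B
datasets:
  - path: edukaai_dataset.jsonl   # your exported dataset (assumed filename)
    type: alpaca                  # matches the Alpaca export format
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs/lora-out
```

Recent Axolotl versions launch training with `axolotl train config.yml` (older versions use `accelerate launch -m axolotl.cli.train config.yml`).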
FASTEST

Unsloth

2x Speed, 70% Less VRAM

Optimized training with custom kernels. Train 7B models on RTX 3090. 500K context support. Free Colab notebooks.

⏱️ Training time: 2-15 min

🎮 GPU: NVIDIA required

📊 Best for: Speed & efficiency

View Complete Guide →
🍎MAC ONLY

MLX

Apple Silicon

Apple's machine learning framework with real LoRA training. Best performance on M1/M2/M3 Macs.

⏱️ Training time: 10-30 min

🎮 Requirements: Apple Silicon

📊 Best for: Mac users

View Complete Guide →
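
On Apple Silicon, the `mlx-lm` package exposes LoRA training as a single command. A sketch — the model name and data directory are assumptions; `--data` should point at a folder containing `train.jsonl` and `valid.jsonl`:

```shell
pip install mlx-lm

# Train LoRA adapters on your exported dataset
mlx_lm.lora --model mlx-community/Llama-3.2-1B-Instruct-4bit \
  --train --data ./data --iters 600

# Generate with the trained adapters applied
mlx_lm.generate --model mlx-community/Llama-3.2-1B-Instruct-4bit \
  --adapter-path ./adapters --prompt "Hello"
```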
🤗OFFICIAL

TRL

HuggingFace Official

Native HuggingFace training library. Full control over the training loop. Supports DPO and RLHF.

⏱️ Training time: 5-30 min

🎮 GPU: NVIDIA recommended

📊 Best for: Learning internals

View Complete Guide →
☁️CLOUD

Hugging Face

Cloud Training

Upload your dataset and use AutoTrain or notebooks. Great for sharing models and collaboration.

⏱️ Training time: Varies

🎮 GPU: Cloud provided

📊 Best for: Sharing models

View Complete Guide →
📓FREE GPU

Google Colab

Free GPU Access

Free Jupyter notebooks with GPU access. Run training without any local setup.

⏱️ Training time: 15-45 min

🎮 GPU: Free T4 GPU

📊 Best for: Learning

View Complete Guide →
CPU ONLY

llama.cpp

CPU Optimized

Run GGUF models with excellent CPU performance. Perfect for machines without a GPU.

⏱️ Setup time: 5 minutes

🎮 GPU: Not needed

📊 Best for: Low-resource systems

View Complete Guide →
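
The basic invocation looks like this once you have a llama.cpp build (the model path is a placeholder — any quantized GGUF file works):

```shell
# Run a quantized GGUF model entirely on CPU
./llama-cli -m ./models/model-q4_k_m.gguf \
  -p "Explain photosynthesis in one sentence." \
  -n 128 --threads 8
```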
🦙EASIEST

Ollama

No Training Required

Embed your data in a Modelfile for instant custom models. No training time - works immediately!

⏱️ Setup time: 2 minutes

🎮 GPU: Not needed

📊 Best for: Quick testing

View Complete Guide →
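
The context-based approach can be sketched with a Modelfile like the one below. The base model and example text are placeholders — your EdukaAI export supplies the real examples:

```
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM """
You are a tutor for the EdukaAI course material.
Answer in the style of the following examples:

Q: What is photosynthesis?
A: The process by which plants convert light into chemical energy.
"""
```

Then build and run it with `ollama create my-tutor -f Modelfile` followed by `ollama run my-tutor`.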

⚖️ Quick Comparison

| Method | Training | GPU | Speed | Best For |
|---|---|---|---|---|
| Axolotl | ✅ Real LoRA | Recommended | 5-30 min | Production use |
| Unsloth | ✅ Real LoRA | Required | 2-15 min (2x) | Speed & efficiency |
| MLX | ✅ Real LoRA | Apple Silicon | Fast | Mac users |
| TRL | ✅ Real LoRA | Recommended | 5-30 min | Learning internals |
| Hugging Face | ✅ Multiple | Cloud | Varies | Sharing |
| Colab | ✅ Real | Free T4 | 15-45 min | Learning |
| llama.cpp | ⚡ Context | ❌ No | Fast CPU | Low-resource |
| Ollama | ⚡ Instant | ❌ No | Instant | Quick testing |

📚 Key Concepts

True Fine-Tuning (Axolotl, Unsloth, MLX, TRL, Hugging Face, Colab)

Actually modifies the model's weights using LoRA/QLoRA. Model learns patterns permanently.

Context-Based (llama.cpp, Ollama)

Embeds training data in system prompt. No actual training, but model learns from examples on the fly.

LoRA (Low-Rank Adaptation)

Efficient fine-tuning method that adds small trainable layers instead of modifying all weights.
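
The parameter savings are easy to see with a quick calculation. A pure-Python sketch (the layer size and rank are illustrative):

```python
# LoRA replaces a full weight update dW (d x k) with two small factors:
# dW ≈ B @ A, where B is (d x r) and A is (r x k), with rank r much smaller than d, k.
d, k, r = 4096, 4096, 8           # illustrative layer size and LoRA rank

full_update_params = d * k        # training the full weight matrix
lora_params = d * r + r * k       # training only the low-rank factors

print(full_update_params)         # 16777216
print(lora_params)                # 65536
print(full_update_params // lora_params)  # 256x fewer trainable parameters
```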

GGUF Format

Quantized model format for efficient CPU inference. Used by Ollama and llama.cpp.

💡 General Tips

Start Small: Use 1B-3B parameter models for testing. They're fast and use less memory.

Quality Over Quantity: 10 great examples beat 100 mediocre ones.

Test First: Try Ollama or llama.cpp before investing time in full training.

Iterate: Test, improve examples, re-export, test again.

Format Matters: Alpaca format works with almost all tools.

Save Your Work: Always keep backups of good training datasets.
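
The Alpaca format mentioned above is just three JSON fields per example. A minimal sketch of writing one record to a JSONL file (the field values are illustrative):

```python
import json

# One Alpaca-format training example: instruction, optional input, output
example = {
    "instruction": "Summarize the water cycle.",
    "input": "",
    "output": "Water evaporates, condenses into clouds, and returns as precipitation.",
}

# Most tools accept a JSONL file: one JSON object per line
line = json.dumps(example)
print(line)
```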

🎓

New to fine-tuning?

Learn the fundamentals: LoRA, adapters, quantization, GGUF format, and what all those terms mean. Perfect for technically-skilled beginners.

Learn Fundamentals →
📊

Just trained a model?

Learn how to evaluate your fine-tuned model: automated metrics, human evaluation, A/B testing, and real-world validation. Know if your model actually learned what you taught it.

Evaluation Guide →
🚀

Ready to deploy?

Complete guide to deploying your fine-tuned LLM: from running locally to serving millions of requests. Local, cloud, managed APIs, and serverless options covered.

Deployment Guide →

High-throughput serving?

Deploy with vLLM for up to 24x higher throughput than standard Hugging Face serving. Perfect for production APIs with multiple concurrent users. OpenAI-compatible API.

vLLM Guide →
🎯

Already trained a model?

Learn what you actually get (adapters, not a model file) and how to use, share, or convert your fine-tuned model to work with Ollama and other tools.

Using Your Model →