
What is Fine-tuning?

The process of further training an existing AI model on your data so it gets better at a specific task or matches a specific style.

Updated: May 2, 2026 · 1 min read

Fine-tuning is the practice of taking a pre-trained AI model (like GPT, Llama, or Claude) and continuing to train it on your own data so it absorbs your style, domain knowledge, or output format.
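In practice, supervised fine-tuning data is a set of prompt/response pairs showing the model exactly what you want it to produce; many providers accept this as one JSON object per line (JSONL). A minimal sketch, with a hypothetical file name and example:

```python
import json

# Hypothetical training examples: each one pairs a prompt with the
# exact response we want the fine-tuned model to learn to produce.
examples = [
    {"messages": [
        {"role": "system", "content": "You are Acme's support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant",
         "content": "Go to Settings > Security > Reset password."},
    ]},
]

# Fine-tuning APIs commonly expect one JSON object per line (JSONL).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Hundreds to thousands of such examples are typically needed before fine-tuning beats a well-written prompt.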

Quick comparison: Fine-tuning vs RAG vs Prompting

Factor          Prompting       RAG                     Fine-tuning
Setup cost      Cheap           Medium                  Expensive
Updating data   Instant         Instant                 Requires re-training
Best for        General tasks   Document-grounded Q&A   Consistent style/format

When to use fine-tuning

  • You need consistent brand voice across all output
  • You need a specific output format (e.g., a complex JSON schema) that prompting alone can't reliably produce
  • You have lots of high-quality domain examples that prompting + RAG can’t match
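The output-format case is the easiest of these to measure: if every response must pass a strict format check, fine-tuning is worth considering once prompting keeps failing that check. A sketch of such a check, against a hypothetical schema:

```python
import json

# Hypothetical target schema: every model response must be a JSON object
# with exactly these keys, each of the given type.
REQUIRED = {"intent": str, "priority": int, "reply": str}

def matches_schema(raw: str) -> bool:
    """Return True if a model response follows the expected format."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (isinstance(obj, dict)
            and set(obj) == set(REQUIRED)
            and all(isinstance(obj[k], t) for k, t in REQUIRED.items()))

good = '{"intent": "refund", "priority": 2, "reply": "On it."}'
bad = 'Sure! Here is some JSON: {...}'
```

Running a check like this over a sample of responses gives you a pass rate, and a concrete number to compare before and after fine-tuning.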

When NOT to use fine-tuning

  • You’re getting started — try prompting and RAG first
  • Your data changes constantly — RAG fits better
  • Your budget is under $1,000 and you have no ML engineer

Lightweight alternatives

  • LoRA — train small low-rank adapter matrices instead of the full weights, drastically cutting trainable parameters and GPU memory requirements
  • Prompt caching — cheaply reuse a long context across requests if RAG is too complex
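The idea behind LoRA fits in a few lines: keep the pre-trained weight matrix W frozen and learn only a low-rank update B·A, so far fewer parameters (and their optimizer states) need to fit in memory. A toy sketch in plain Python, with hypothetical dimensions and values:

```python
def matmul(X, Y):
    """Plain nested-loop matrix multiply, just for the sketch."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 4, 1  # toy sizes; real models use d in the thousands, r of 4-64
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
A = [[0.1, 0.2, 0.3, 0.4]]        # r x d, trainable
B = [[0.5], [0.0], [0.0], [0.0]]  # d x r, trainable

# Effective weights: W_eff = W + B @ A (a rank-r update of W)
delta = matmul(B, A)
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d              # parameters if we trained W directly
lora_params = r * d + d * r      # parameters in A and B combined
```

Here only 8 of 16 parameters are trainable; at realistic sizes (d in the thousands, r small) the gap between d² and 2·r·d is what makes LoRA so much cheaper.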
Tags
#fine-tuning #training