Welcome! This documentation will guide you through fine-tuning LLaMA 2 with LoRA on a personal dataset.
You can find and use the trained LoRA adapter layers for this project on Hugging Face:
Hemanthchallapalli/lora-llama2-about-me (Hugging Face Model Hub)
You can run the full LoRA fine-tuning workflow in Google Colab.
Before you begin, install the required Python packages in your Colab or local environment:
!pip install --upgrade transformers accelerate bitsandbytes
!pip install bert-score sentence-transformers matplotlib
!pip install rouge_score evaluate nltk
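After installing, it can help to confirm that everything is importable before starting a long training run. The snippet below is a small sanity check (not part of the original workflow); the names listed are the Python import names, which sometimes differ from the pip package names (e.g. `bert-score` imports as `bert_score`):

```python
import importlib.util

# Import names assumed for the packages installed above; pip names
# with hyphens (bert-score, rouge_score) use underscores on import.
REQUIRED = ["transformers", "accelerate", "bitsandbytes",
            "bert_score", "sentence_transformers", "matplotlib",
            "rouge_score", "evaluate", "nltk"]

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

print("missing:", missing_packages(REQUIRED))
```

If the printed list is non-empty, rerun the corresponding `pip install` commands before continuing.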
Optional: For experiment tracking, you can use Weights & Biases (W&B). To log in, run:
import wandb
wandb.login() # Enter your W&B token when prompted
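Since W&B tracking is optional, one pattern is to guard the initialization so the notebook still runs without an account. This is a hedged sketch, not part of the original notebook; the project name `lora-llama2-about-me` and the `WANDB_API_KEY` environment variable check are assumptions:

```python
import importlib.util
import os

def maybe_init_wandb(project="lora-llama2-about-me"):
    """Start a W&B run only if wandb is installed and a key is set;
    otherwise return None and continue without experiment tracking."""
    if importlib.util.find_spec("wandb") is None:
        return None  # wandb not installed; skip tracking
    if "WANDB_API_KEY" not in os.environ:
        return None  # no API key configured; skip tracking
    import wandb
    return wandb.init(project=project)  # assumed project name

run = maybe_init_wandb()
print("tracking enabled:", run is not None)
```

With this guard, the same notebook works both with and without W&B configured.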
Hardware / Runtime
Google Colab GPU (A100 or V100 recommended)
Minimum 12–16 GB RAM
This project was trained on a Colab T4 GPU.
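Before training, you can confirm which GPU Colab assigned to your runtime. The helper below is a small convenience sketch (assuming PyTorch is available, which the `transformers` install above pulls in); it degrades gracefully on CPU-only runtimes:

```python
import importlib.util

def gpu_name():
    """Return the name of the first CUDA device, or None if no GPU
    (or no torch install) is available in this runtime."""
    if importlib.util.find_spec("torch") is None:
        return None  # torch not installed
    import torch
    if not torch.cuda.is_available():
        return None  # CPU-only runtime
    return torch.cuda.get_device_name(0)

print("GPU:", gpu_name() or "none detected")
```

On a Colab T4 runtime this should report the T4; if it prints "none detected", switch the runtime type to GPU before fine-tuning.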
Next: Proceed to Setup for environment and model preparation.