RLOO Training Test: Ministral (Text-Only)¶
Tests REINFORCE Leave-One-Out (RLOO) optimization with Unsloth on Ministral-3B in text-only mode.
Model Variant: Text-only (FastLanguageModel)
Expected Result: Under test; Ministral is a multimodal architecture, so this notebook verifies whether text-only RLOO training works.
Key features tested:
- FastLanguageModel loading with 4-bit quantization
- LoRA adapter configuration
- RLOOTrainer with synthetic reward function
- Post-training inference verification
RLOO Overview: RLOO uses leave-one-out baseline estimation for variance reduction in policy gradients. For each completion, the baseline is computed as the mean reward of all other completions, providing more stable training than single-sample estimates.
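To make the baseline concrete, here is a minimal sketch (illustrative only, not code from this notebook; RLOOTrainer performs this internally) of how a leave-one-out baseline and the resulting advantages could be computed for K completions of a single prompt. The reward values are made up.

```python
# Minimal sketch: leave-one-out baseline for K completions of one prompt
rewards = [1.5, 0.5, 1.0, 0.0]   # hypothetical rewards for K = 4 completions
K = len(rewards)
total = sum(rewards)

# Baseline for completion i = mean reward of the other K - 1 completions
baselines = [(total - r) / (K - 1) for r in rewards]
advantages = [r - b for r, b in zip(rewards, baselines)]

print(baselines)   # [0.5, 0.833..., 0.666..., 1.0]
print(advantages)  # [1.0, -0.333..., 0.333..., -1.0]
```

Because each completion is judged against its peers, no learned value network is needed, and the advantages for a given prompt always sum to zero.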
Key Differences from Qwen:
- Uses unsloth/Ministral-3-3B-Reasoning-2512 (multimodal architecture)
- Chat template uses the multimodal content format: {"type": "text", "text": "..."} (see the example below)
Important: This notebook includes a kernel shutdown cell at the end to release all GPU memory.
In [1]:
# Environment Setup
import os
# FIX: Set ACCELERATE_MIXED_PRECISION BEFORE importing unsloth
os.environ['ACCELERATE_MIXED_PRECISION'] = 'bf16'
from dotenv import load_dotenv
load_dotenv()
# Force text-based progress instead of HTML widgets
os.environ["TQDM_NOTEBOOK"] = "false"
# CRITICAL: Import unsloth FIRST for proper TRL patching
import unsloth
from unsloth import FastLanguageModel, is_bf16_supported
import torch
from trl import RLOOConfig, RLOOTrainer
from datasets import Dataset
# Environment summary
gpu = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU"
print(f"Environment: unsloth {unsloth.__version__}, PyTorch {torch.__version__}, {gpu}")
print(f"ACCELERATE_MIXED_PRECISION: {os.environ.get('ACCELERATE_MIXED_PRECISION', 'not set')}")
print(f"HF_TOKEN loaded: {'Yes' if os.environ.get('HF_TOKEN') else 'No'}")
Out[1]:
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
Out[1]:
/opt/pixi/.pixi/envs/default/lib/python3.13/site-packages/trl/__init__.py:203: UserWarning: TRL currently supports vLLM versions: 0.10.2, 0.11.0, 0.11.1, 0.11.2. You have version 0.14.0rc1.dev201+gadcf682fc.cu130 installed. We recommend installing a supported version to avoid compatibility issues. if is_vllm_available():
Out[1]:
🦥 Unsloth Zoo will now patch everything to make training faster!
Out[1]:
Environment: unsloth 2025.12.10, PyTorch 2.9.1+cu130, NVIDIA GeForce RTX 4080 SUPER
ACCELERATE_MIXED_PRECISION: bf16
HF_TOKEN loaded: Yes
In [2]:
# Load Ministral-3B with 4-bit quantization (using FastLanguageModel for text-only)
MODEL_NAME = "unsloth/Ministral-3-3B-Reasoning-2512"
print(f"\nLoading {MODEL_NAME.split('/')[-1]} with FastLanguageModel (text-only mode)...")
model, tokenizer = FastLanguageModel.from_pretrained(
MODEL_NAME,
max_seq_length=512,
load_in_4bit=True,
dtype=None,
)
# Ensure pad token is set
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
print(f"Model loaded: {type(model).__name__}")
Out[2]:
Loading Ministral-3-3B-Reasoning-2512 with FastLanguageModel (text-only mode)...
Out[2]:
==((====))==  Unsloth 2025.12.10: Fast Ministral3 patching. Transformers: 5.0.0rc1. vLLM: 0.14.0rc1.dev201+gadcf682fc.cu130.
   \\   /|    NVIDIA GeForce RTX 4080 SUPER. Num GPUs = 1. Max memory: 15.568 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.1+cu130. CUDA: 8.9. CUDA Toolkit: 13.0. Triton: 3.5.1
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.33.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Out[2]:
Model loaded: Mistral3ForConditionalGeneration
In [3]:
# Apply LoRA adapters for RLOO training
model = FastLanguageModel.get_peft_model(
model,
r=16,
lora_alpha=16,
lora_dropout=0,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj"],
bias="none",
use_gradient_checkpointing="unsloth",
random_state=42,
)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"LoRA applied: {trainable:,} trainable / {total:,} total ({100*trainable/total:.2f}%)")
Out[3]:
Unsloth: Making `model.base_model.model.model.vision_tower.transformer` require gradients
Out[3]:
LoRA applied: 33,751,040 trainable / 2,160,030,720 total (1.56%)
In [4]:
# Create minimal synthetic prompt dataset for RLOO (5 prompts)
# Using Ministral's multimodal chat format for text-only content
prompts = [
"Explain the concept of recursion in programming.",
"What are the benefits of using version control?",
"Describe how a hash table works.",
"What is the difference between a stack and a queue?",
"Explain what an API is to a beginner.",
]
# Format prompts for RLOO using Ministral's multimodal format
dataset = Dataset.from_dict({
"prompt": [
tokenizer.apply_chat_template(
[{"role": "user", "content": [{"type": "text", "text": p}]}],
tokenize=False,
add_generation_prompt=True
) for p in prompts
]
})
print(f"Dataset created: {len(dataset)} prompts")
Out[4]:
Dataset created: 5 prompts
In [5]:
# Define a simple reward function for testing
def simple_reward_fn(completions, prompts=None, **kwargs):
"""Simple reward function for testing RLOO."""
rewards = []
for completion in completions:
length = len(completion.split())
score = 0.0
if 10 <= length <= 50:
score += 1.0
elif length < 10:
score -= 0.5
if completion.strip().endswith("."):
score += 0.5
rewards.append(score)
return rewards
print("Reward function defined: simple_reward_fn")
Out[5]:
Reward function defined: simple_reward_fn
In [ ]:
# RLOO Training Configuration (minimal steps for testing)
rloo_config = RLOOConfig(
output_dir="outputs_rloo_ministral_text_test",
per_device_train_batch_size=4,
gradient_accumulation_steps=1,
max_steps=2,
warmup_steps=0,
learning_rate=1e-5,
logging_steps=1,
fp16=not is_bf16_supported(),
bf16=is_bf16_supported(),
optim="adamw_8bit",
num_generations=4,
max_completion_length=64,
beta=0.05,
seed=42,
)
print("Starting RLOO training (2 steps)...")
try:
trainer = RLOOTrainer(
model=model,
args=rloo_config,
train_dataset=dataset,
processing_class=tokenizer,
reward_funcs=simple_reward_fn,
)
trainer_stats = trainer.train()
print(f"RLOO training completed!")
RLOO_TEXT_SUPPORTED = True
except Exception as e:
print(f"RLOO training failed: {e}")
RLOO_TEXT_SUPPORTED = False
In [ ]:
# Post-training inference test
FastLanguageModel.for_inference(model)
test_prompt = "What is machine learning?"
messages = [{"role": "user", "content": [{"type": "text", "text": test_prompt}]}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(None, input_text, add_special_tokens=False, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.7,
top_p=0.9,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Clean up BPE artifacts from Ministral tokenizer (Ġ=space, Ċ=newline)
response = response.replace('Ġ', ' ').replace('Ċ', '\n').strip()
# Clear success/failure banner
print("=" * 60)
if RLOO_TEXT_SUPPORTED:
print("RLOO Training: SUPPORTED for Ministral (Text-Only)")
print("Model: FastLanguageModel + Ministral-3-3B-Reasoning-2512")
else:
print("RLOO Training: NOT SUPPORTED for Ministral (Text-Only)")
print("Reason: See error above")
print("=" * 60)
print(f"Sample generation:\n{response[-200:]}")
Test Complete¶
The RLOO Training Pipeline test for Ministral (Text-Only) has completed. The kernel will now shut down to release all GPU memory.
What Was Verified¶
- FastLanguageModel loading with 4-bit quantization (Ministral-3B)
- LoRA adapter configuration for RL training
- Synthetic prompt dataset with Ministral's multimodal format
- Simple reward function integration
- RLOOTrainer training loop (2 steps)
- Post-training inference generation
RLOO Concepts Demonstrated¶
- Leave-One-Out Baseline: Each completion's baseline is the mean reward of the other K-1 completions
- Variance Reduction: More stable gradients than single-sample baseline estimates
- KL Penalty: Prevents the policy from diverging too far from the reference model (see the sketch below)
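As a rough illustration of the KL penalty (an assumed sketch, not code from this notebook and not a guarantee of how RLOOTrainer implements it internally), the reward for each completion can be shaped by subtracting beta times a log-ratio estimate of the KL between the current policy and the frozen reference, with beta=0.05 matching the RLOOConfig above:

```python
# Assumed sketch of KL-shaped reward: r - beta * (logprob_policy - logprob_ref)
beta = 0.05                # same beta as in the RLOOConfig cell above

raw_reward = 1.0           # hypothetical reward from simple_reward_fn
logprob_policy = -1.2      # hypothetical log-prob under the current policy
logprob_ref = -1.5         # hypothetical log-prob under the frozen reference

kl_estimate = logprob_policy - logprob_ref   # log-ratio estimate of the KL term
shaped_reward = raw_reward - beta * kl_estimate
print(f"KL estimate: {kl_estimate:.2f}, shaped reward: {shaped_reward:.3f}")
# KL estimate: 0.30, shaped reward: 0.985
```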
Next Steps¶
- Compare with 07_RLOO_Training_Ministral_Vision.ipynb for vision RLOO
In [8]:
# Shutdown kernel to release all GPU memory
import IPython
print("Shutting down kernel to release GPU memory...")
app = IPython.Application.instance()
app.kernel.do_shutdown(restart=False)
Out[8]:
Shutting down kernel to release GPU memory...
Out[8]:
{'status': 'ok', 'restart': False}