Built with Axolotl

Axolotl config (axolotl version 0.12.2):

# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
# Whether to load the model in 8-bit precision
load_in_8bit: false
# Whether to load the model in 4-bit precision (tied to QLoRA, which requires it)
load_in_4bit: false
# Whether to require a strict match with the model structure; disabling allows minor structural differences (e.g. to fit an adapter)
# strict: false
base_model: Qwen/Qwen3-4B-Instruct-2507
# Dataset settings
chat_template: qwen3
datasets:
  - path: /workspace/train_data_0923/all_data.json # each "-" entry is one list item, so several datasets can be used at once
    type: chat_template # chat_template (custom format) or alpaca
    roles_to_train: ["assistant"]
    field_messages: messages # field that holds the conversation (see the example record after this config)
    message_property_mappings:  # e.g. message_property_mappings={'role': 'role', 'content': 'content'}
      role: role
      content: content
dataset_prepared_path:
val_set_size: 0.05
output_dir: /workspace/train_data_0923/checkpoints/0923
sequence_len: 16384 # maximum context length the model handles during training (default 2048)
pad_to_sequence_len: true
# context_parallel_size: 2 # split long sequences across multiple GPUs (requires micro_batch_size: 1)
sample_packing: false # pack multiple samples into one long sequence (up to sequence_len) during training to improve efficiency
eval_sample_packing: false # pack multiple samples during evaluation
# Training hyperparameters
adapter: lora  # lora or qlora
lora_model_dir:
lora_r: 16 # 16 is a sensible default for lora_r, balancing accuracy against VRAM
lora_alpha: 64 # scaling coefficient controlling the LoRA update's influence; commonly set to 2*r or 4*r
lora_dropout: 0.05
lora_target_linear: true
micro_batch_size: 4 # micro-batch size; a 94 GB H100 handles 4 at ~10k tokens per sample
gradient_accumulation_steps: 2 # gradient accumulation: accumulate gradients over several micro-batches (micro_batch_size) before each weight update. An effective batch of 16 is a common target: below 8 training gets noisy, above 32 it mostly costs extra time for little gain
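# For reference (assuming this run's 4 GPUs): effective batch size
#   = micro_batch_size x gradient_accumulation_steps x num_gpus = 4 x 2 x 4 = 32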
auto_find_batch_size: false # let Axolotl adjust batch_size automatically  ⚠️ not applicable with ZeRO-3
num_epochs: 1
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 4e-5
# bf16: auto together with tf32: true gives better stability and performance.
bf16: auto
tf32: true
# early_stopping_patience:
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
# auto_resume_from_checkpoints: true # automatically resume from the latest checkpoint in output_dir
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false  # the H200 has enough VRAM, no offload needed
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: Qwen3DecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD
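Because the config uses `type: chat_template` with `field_messages: messages`, each record in `all_data.json` is expected to hold a `messages` list of role/content turns; with `roles_to_train: ["assistant"]`, loss is computed only on assistant turns. A minimal sketch of one record, with purely illustrative content:

```python
# Hypothetical shape of one training record in all_data.json.
record = {
    "messages": [
        {"role": "user", "content": "..."},       # context only, masked out of the loss
        {"role": "assistant", "content": "..."},  # the turn the model is trained on
    ]
}
```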

workspace/train_data_0923/checkpoints/0923

This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507 on the /workspace/train_data_0923/all_data.json dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0419
  • Max memory active: 128.99 GiB
  • Max memory allocated: 128.8 GiB
  • Device memory reserved: 130.65 GiB
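
A minimal inference sketch with Transformers and PEFT, assuming the LoRA adapter saved to `output_dir` is published as `cjkasbdkjnlakb/agent-0923` (substitute a local checkpoint path otherwise):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "cjkasbdkjnlakb/agent-0923")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# Build a prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For deployment without the PEFT wrapper, the adapter weights can also be folded into the base model with `model.merge_and_unload()`.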

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32 (train_batch_size 4 × gradient_accumulation_steps 2 × 4 devices)
  • total_eval_batch_size: 16 (eval_batch_size 4 × 4 devices)
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 1472

Training results

| Training Loss | Epoch | Step | Validation Loss | Mem Active (GiB) | Mem Allocated (GiB) | Mem Reserved (GiB) |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:------------------:|:------------------:|
| No log        | 0     | 0    | 1.0273          | 98.27            | 98.07              | 99.47              |
| 0.059         | 0.25  | 368  | 0.0510          | 128.99           | 128.8              | 130.32             |
| 0.0496        | 0.5   | 736  | 0.0456          | 128.99           | 128.8              | 130.65             |
| 0.0412        | 0.75  | 1104 | 0.0428          | 128.99           | 128.8              | 130.65             |
| 0.0576        | 1.0   | 1472 | 0.0419          | 128.99           | 128.8              | 130.65             |

Framework versions

  • PEFT 0.17.0
  • Transformers 4.55.2
  • Pytorch 2.6.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.21.4