How to load LoRA weights

#17
by wren93 - opened

Hello, based on the code in the latest commit, it seems the LoRA weights are never actually loaded: the model runs 8-step inference directly with quantization only, without the LoRA. When I switch to an earlier commit that uses optimize_pipeline_() to load the LoRA weights, via this part of the code:

pipeline.load_lora_weights(
    "Kijai/WanVideo_comfy",
    weight_name="Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors",
    adapter_name="lightx2v",
)

I got the following error:

Loading adapter weights from state_dict led to unexpected keys found in the model: condition_embedder.image_embedder.ff.net.0.proj.lora_A.lightx2v.weight, condition_embedder.image_embedder.ff.net.0.proj.lora_B.lightx2v.weight, condition_embedder.image_embedder.ff.net.0.proj.lora_B.lightx2v.bias, condition_embedder.image_embedder.ff.net.2.lora_A.lightx2v.weight, condition_embedder.image_embedder.ff.net.2.lora_B.lightx2v.weight, condition_embedder.image_embedder.ff.net.2.lora_B.lightx2v.bias, blocks.0.attn2.add_k_proj.lora_A.lightx2v.weight, blocks.0.attn2.add_k_proj.lora_B.lightx2v.weight, blocks.0.attn2.add_k_proj.lora_B.lightx2v.bias, blocks.0.attn2.add_v_proj.lora_A.lightx2v.weight, blocks.0.attn2.add_v_proj.lora_B.lightx2v.weight, blocks.0.attn2.add_v_proj.lora_B.lightx2v.bias, blocks.1.attn2.add_k_proj.lora_A.lightx2v.weight, blocks.1.attn2.add_k_proj.lora_B.lightx2v.weight, blocks.1.attn2.add_k_proj.lora_B.lightx2v.bias, blocks.1.attn2.add_v_proj.lora_A.lightx2v.weight, blocks.1.attn2.add_v_proj.lora_B.lightx2v.weight, blocks.1.attn2.add_v_proj.lora_B.lightx2v.bias, blocks.2.attn2.add_k_proj.lora_A.lightx2v.weight, 
...

I'm wondering if this happens because the LoRA weights were trained for Wan 2.1 rather than Wan 2.2. Is there a way to resolve this issue?
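One rough way to check this hypothesis is to compare the module paths the LoRA state dict targets against the modules that actually exist in the transformer. The sketch below is a hypothetical diagnostic, not part of the repo; with a real checkpoint you would obtain the keys via something like safetensors.torch.load_file and the module names via transformer.named_modules(), but here it is illustrated with toy key names mirroring the error message:

```python
def lora_target_modules(lora_state_dict_keys):
    """Strip the '.lora_A.<adapter>.weight'-style suffixes, leaving the
    base module path each LoRA tensor targets."""
    targets = set()
    for key in lora_state_dict_keys:
        for marker in (".lora_A.", ".lora_B."):
            if marker in key:
                targets.add(key.split(marker)[0])
                break
    return targets

def unexpected_targets(lora_state_dict_keys, model_module_names):
    """Return LoRA target modules that are missing from the model."""
    return sorted(lora_target_modules(lora_state_dict_keys) - set(model_module_names))

# Toy illustration with made-up keys shaped like the ones in the error:
lora_keys = [
    "blocks.0.attn1.to_q.lora_A.lightx2v.weight",
    "blocks.0.attn1.to_q.lora_B.lightx2v.weight",
    "blocks.0.attn2.add_k_proj.lora_A.lightx2v.weight",  # present in Wan 2.1, possibly absent in Wan 2.2
]
model_modules = ["blocks.0.attn1.to_q", "blocks.0.attn2.to_q"]
print(unexpected_targets(lora_keys, model_modules))
# -> ['blocks.0.attn2.add_k_proj']
```

If the unexpected keys all cluster on modules like attn2.add_k_proj / add_v_proj and condition_embedder.image_embedder, that would support the idea of an architecture mismatch between the checkpoint the LoRA was trained on and the model being patched.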
