vLLM error

#1
by celsowm - opened
docker run -d \
>   --name gemma4-31b-fp8-32k \
>   --gpus '"device=6"' \
>   -p 8006:8006 \
>   -v /srv/models:/models \
>   vllm/vllm-openai:nightly \
>     --model /models/gemma-4-31B-it-FP8_BLOCK \
>     --host 0.0.0.0 \
>     --port 8006 \
>     --max-model-len 32768 \
>     --gpu-memory-utilization 0.60 \
>     --kv-cache-dtype fp8 \
>     --enable-prefix-caching \
>     --reasoning-parser gemma4 \
>     --tool-call-parser gemma4
f3684eae5c6b4d9eb9083354f65e64539d53a1fde28510370dce0094fb050b39
root@srv-ia-020:/home/fontesc# docker logs -f f3684eae5c6b4d9eb9083354f65e64539d53a1fde28510370dce0094fb050b39
WARNING 04-04 16:41:23 [argparse_utils.py:191] With `vllm serve`, you should provide the model as a positional argument or in a config file instead of via the `--model` option. The `--model` option will be removed in v0.13.
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:299]
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:299]        β–ˆ     β–ˆ     β–ˆβ–„   β–„β–ˆ
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:299]  β–„β–„ β–„β–ˆ β–ˆ     β–ˆ     β–ˆ β–€β–„β–€ β–ˆ  version 0.19.1rc1.dev29+g93726b2a1
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:299]   β–ˆβ–„β–ˆβ–€ β–ˆ     β–ˆ     β–ˆ     β–ˆ  model   /models/gemma-4-31B-it-FP8_BLOCK
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:299]    β–€β–€  β–€β–€β–€β–€β–€ β–€β–€β–€β–€β–€ β–€     β–€
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:299]
(APIServer pid=1) INFO 04-04 16:41:23 [utils.py:233] non-default args: {'model_tag': '/models/gemma-4-31B-it-FP8_BLOCK', 'tool_call_parser': 'gemma4', 'host': '0.0.0.0', 'port': 8006, 'model': '/models/gemma-4-31B-it-FP8_BLOCK', 'max_model_len': 32768, 'reasoning_parser': 'gemma4', 'gpu_memory_utilization': 0.6, 'kv_cache_dtype': 'fp8', 'enable_prefix_caching': True}
(APIServer pid=1) Traceback (most recent call last):
(APIServer pid=1)   File "/usr/local/bin/vllm", line 10, in <module>
(APIServer pid=1)     sys.exit(main())
(APIServer pid=1)              ^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 75, in main
(APIServer pid=1)     args.dispatch_function(args)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 122, in cmd
(APIServer pid=1)     uvloop.run(run_server(args))
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=1)     return __asyncio.run(
(APIServer pid=1)            ^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1)     return runner.run(main)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1)     return self._loop.run_until_complete(task)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=1)     return await main
(APIServer pid=1)            ^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 684, in run_server
(APIServer pid=1)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 698, in run_server_worker
(APIServer pid=1)     async with build_async_engine_client(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 100, in build_async_engine_client
(APIServer pid=1)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 124, in build_async_engine_client_from_engine_args
(APIServer pid=1)     vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=1)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1574, in create_engine_config
(APIServer pid=1)     model_config = self.create_model_config()
(APIServer pid=1)                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1422, in create_model_config
(APIServer pid=1)     return ModelConfig(
(APIServer pid=1)            ^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=1)     s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=1) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=1)   Value error, The checkpoint you are trying to load has model type `gemma4` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
(APIServer pid=1)
(APIServer pid=1) You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git` [type=value_error, input_value=ArgsKwargs((), {'model': ...nderer_num_workers': 1}), input_type=ArgsKwargs]
(APIServer pid=1)     For further information visit https://errors.pydantic.dev/2.12/v/value_error

Just add a `pip install --upgrade transformers` step before `vllm serve`; it will fix the issue.
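One way to make that upgrade persist across container restarts is to bake it into a derived image. This is only a sketch: the base tag `vllm/vllm-openai:nightly` comes from the `docker run` above, while the filename `Dockerfile.gemma4` and image tag `vllm-openai-gemma4` are made-up names for illustration.

```shell
# Write a minimal derived image that upgrades transformers
# before the vllm entrypoint ever runs (sketch, names are ours).
cat > Dockerfile.gemma4 <<'EOF'
FROM vllm/vllm-openai:nightly
RUN pip install --upgrade transformers
EOF

# Then build it and swap the image name into the docker run command above:
#   docker build -f Dockerfile.gemma4 -t vllm-openai-gemma4 .
```

Alternatively, for a one-off test you could override the entrypoint (`docker run --entrypoint bash ... -c "pip install --upgrade transformers && vllm serve ..."`), but the derived image avoids re-downloading the package on every restart.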

Red Hat AI org

Hi @celsowm, please follow the vLLM install guide linked in the model card: https://docs.vllm.ai/projects/recipes/en/latest/Google/Gemma4.html#installing-vllm
