ValueError: Some specified arguments are not used by the HfArgumentParser: ['vicuna'] #7021

Closed

SovietLongbow opened this issue Feb 21, 2025 · 1 comment
Labels
solved This problem has been already solved

Comments

@SovietLongbow

Reminder

  • I have read the above rules and searched the existing issues.

System Info

```
(llama) root@autodl-container-30634997bd-fe761c7a:~/LLaMA-Factory# llamafactory-cli webchat \
    --model_name_or_path /root/autodl-tmp/swift/llava-1.5-7b-hf \
    --template vicuna \
    --infer_backend huggingface
[2025-02-21 08:02:58,991] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.1), only 1.0.0 is known to be compatible

Got unknown args, potentially deprecated arguments: ['vicuna']

ValueError: Some specified arguments are not used by the HfArgumentParser: ['vicuna']
```

Reproduction

```
Got unknown args, potentially deprecated arguments: ['vicuna']
Traceback (most recent call last):
  File "/root/miniconda3/envs/llama/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/cli.py", line 114, in main
    run_web_demo()
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/webui/interface.py", line 98, in run_web_demo
    create_web_demo().queue().launch(share=gradio_share, server_name=server_name, inbrowser=True)
    ^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/webui/interface.py", line 71, in create_web_demo
    engine = Engine(pure_chat=True)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/webui/engine.py", line 35, in __init__
    self.chatter = WebChatModel(self.manager, demo_mode, lazy_init=(not pure_chat))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/webui/chatter.py", line 44, in __init__
    super().__init__()
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/chat/chat_model.py", line 49, in __init__
    model_args, data_args, finetuning_args, generating_args = get_infer_args(args)
                                                              ^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/hparams/parser.py", line 371, in get_infer_args
    model_args, data_args, finetuning_args, generating_args = _parse_infer_args(args)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/hparams/parser.py", line 152, in _parse_infer_args
    return _parse_args(parser, args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/llama/lib/python3.11/site-packages/llamafactory/hparams/parser.py", line 70, in _parse_args
    raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {unknown_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['vicuna']
```
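For context, the traceback shows the error originating in LLaMA-Factory's `_parse_args`, which raises on any tokens that transformers' `HfArgumentParser` fails to match. A minimal sketch of that mechanism (the `DemoArgs` dataclass below is illustrative, not LLaMA-Factory's real argument classes):

```
# Reproduce the error class: HfArgumentParser hands back tokens it
# cannot match, and the caller raises if any are left over.
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class DemoArgs:  # illustrative stand-in for the real argument dataclasses
    model_name_or_path: Optional[str] = field(default=None)
    template: Optional[str] = field(default=None)


parser = HfArgumentParser(DemoArgs)
# A stray "vicuna" token (e.g. one whose "--template" flag was lost)
# matches no dataclass field, so it comes back in `unknown`.
demo_args, unknown = parser.parse_args_into_dataclasses(
    args=["--model_name_or_path", "x", "vicuna"],
    return_remaining_strings=True,
)
if unknown:
    raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {unknown}")
```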

Others

After deploying the latest LLaMA-Factory on a Linux compute platform and downloading the llava-1.5-7b model, the exception shown above occurred when running webchat.

I was following the inference section of the official documentation, using the commands given there.

Because the directory where LLaMA-Factory is placed did not have enough space, I stored the model under a different path.

Multimodal models

For multimodal models, you can run the following command to perform inference:

```
llamafactory-cli webchat examples/inference/llava1_5.yaml
```

An example configuration for examples/inference/llava1_5.yaml:

```
model_name_or_path: llava-hf/llava-1.5-7b-hf
template: vicuna
infer_backend: huggingface  # choices: [huggingface, vllm]
```
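Since the model was downloaded to a non-default path, one way to follow that recipe is to write a local copy of the YAML pointing at the downloaded directory and pass the file to webchat. A sketch (the llava1_5_local.yaml filename is hypothetical; the model path is the one from this report):

```
# Write an inference config that points at the locally stored model
# instead of the Hub ID, then launch webchat with it.
cat > llava1_5_local.yaml <<'EOF'
model_name_or_path: /root/autodl-tmp/swift/llava-1.5-7b-hf
template: vicuna
infer_backend: huggingface
EOF
llamafactory-cli webchat llava1_5_local.yaml
```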

Please advise.

@SovietLongbow added the bug and pending labels on Feb 21, 2025
@BUAADreamer (Collaborator)

Please clone and install the latest llamafactory.
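In shell terms, the suggested fix is a from-source reinstall, roughly as sketched below (the repo URL and install extras follow the LLaMA-Factory README):

```
# Fetch the latest LLaMA-Factory source and install it in editable mode.
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
```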

@hiyouga added the solved label and removed the bug and pending labels on Feb 21, 2025
@hiyouga closed this as completed on Feb 21, 2025