ValueError: Some specified arguments are not used by the HfArgumentParser: ['vicuna'] #7021
Closed
Labels
solved
This problem has been already solved
Reminder
System Info
```
(llama) root@autodl-container-30634997bd-fe761c7a:~/LLaMA-Factory# llamafactory-cli webchat
--model_name_or_path /root/autodl-tmp/swift/llava-1.5-7b-hf
--template vicuna
--infer_backend huggingface
[2025-02-21 08:02:58,991] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.1), only 1.0.0 is known to be compatible
Got unknown args, potentially deprecated arguments: ['vicuna']
ValueError: Some specified arguments are not used by the HfArgumentParser: ['vicuna']
```
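The message comes from `transformers`' `HfArgumentParser`, which raises a `ValueError` whenever tokens are left over after parsing. A bare token like `vicuna` typically ends up unparsed when a flag and its value get separated, for example when a line-continuation backslash is lost and `--template` never reaches the parser as a flag/value pair. A minimal sketch using plain `argparse` to mimic that behavior (the `parse_strict` helper is hypothetical, not LLaMA-Factory's actual code):

```python
import argparse

def parse_strict(argv):
    # Mimics HfArgumentParser: known flags are parsed into a namespace,
    # and any leftover tokens raise the same kind of ValueError.
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_name_or_path")
    parser.add_argument("--template")
    parser.add_argument("--infer_backend")
    args, remaining = parser.parse_known_args(argv)
    if remaining:
        raise ValueError(
            f"Some specified arguments are not used by the HfArgumentParser: {remaining}"
        )
    return args

# With the continuation lost, 'vicuna' arrives as a bare positional token:
try:
    parse_strict(["--model_name_or_path", "/root/autodl-tmp/swift/llava-1.5-7b-hf",
                  "vicuna"])
except ValueError as e:
    print(e)  # → Some specified arguments are not used by the HfArgumentParser: ['vicuna']
```

Passing the same token as a proper flag value (`--template vicuna`) parses cleanly, which is why the documented YAML-file invocation below avoids the problem entirely.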
Reproduction
Others
After deploying the latest LLaMA-Factory on a Linux compute platform and downloading the llava-1.5-7b model, the exception above occurred when trying webchat, following the commands given in the inference section of the official documentation.
Because the directory holding LLaMA-Factory did not have enough space, the model was placed in a different directory.
Multimodal models
For multimodal models, you can run the following command for inference:
```
llamafactory-cli webchat examples/inference/llava1_5.yaml
```
An example configuration for examples/inference/llava1_5.yaml:
```
model_name_or_path: llava-hf/llava-1.5-7b-hf
template: vicuna
infer_backend: huggingface  # choices: [huggingface, vllm]
```
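Since the model was downloaded to a local directory rather than pulled from the Hub, the same YAML can point at that path and be passed as the single argument to the CLI (a sketch using the path from the log above; file location is an assumption):

```yaml
# local_llava1_5.yaml — adapted from examples/inference/llava1_5.yaml
model_name_or_path: /root/autodl-tmp/swift/llava-1.5-7b-hf
template: vicuna
infer_backend: huggingface  # choices: [huggingface, vllm]
```

Invoking `llamafactory-cli webchat local_llava1_5.yaml` keeps all options inside one file, so no flag can be split across shell lines.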
Please advise how to fix this.