Help: fine-tuning with the GPU (MPS) on a Mac using llama-factory
Description
Below are my parameters, but training only runs on the CPU:
```shell
llamafactory-cli train \
    --stage sft \
    --do_train True \
    --model_name_or_path /Users/brody/ai/huggingface/llms/Qwen2.5-0.5B-Instruct-root/Qwen2.5-0.5B-Instruct \
    --preprocessing_num_workers 16 \
    --finetuning_type full \
    --template qwen \
    --flash_attn auto \
    --dataset_dir /Users/brody/ai/huggingface/dataset/ruozhiba \
    --dataset datasets \
    --cutoff_len 512 \
    --learning_rate 5e-05 \
    --num_train_epochs 5.0 \
    --max_samples 100000 \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 100 \
    --warmup_steps 0 \
    --packing False \
    --report_to none \
    --output_dir /Users/brody/ai/huggingface/dataset/ruozhiba/saves \
    --plot_loss True \
    --trust_remote_code True \
    --ddp_timeout 180000000 \
    --include_num_input_tokens_seen True \
    --optim adamw_torch \
    --val_size 0.2 \
    --eval_strategy steps \
    --eval_steps 100 \
    --per_device_eval_batch_size 8 \
    --no_cuda true \
    --bf16 False
```
How can llama-factory use MPS on a Mac?
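One thing worth checking first is whether PyTorch can see the MPS backend at all; also note that `--no_cuda true` in the command above likely forces CPU (in Transformers this option maps to CPU-only mode, which disables MPS as well), so dropping that flag is a reasonable first step. Below is a minimal diagnostic sketch, assuming PyTorch ≥ 1.12 (the release that added the `torch.backends.mps` API); `pick_device` is a hypothetical helper, not part of llama-factory:

```python
import torch  # PyTorch >= 1.12 exposes torch.backends.mps

def pick_device() -> torch.device:
    """Return the best available accelerator, preferring MPS on Apple silicon."""
    if torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
print(f"Selected device: {device}")

# Sanity check: run a small op on the chosen device.
x = torch.ones(2, 2, device=device)
print(x.sum().item())  # 4.0 on any backend
```

If this prints `cpu` on an Apple-silicon Mac, the installed PyTorch build lacks MPS support and no launcher flag will help; reinstalling an official macOS arm64 PyTorch build would be the fix in that case.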