
WARN llama-server <chat> exited with status code 1 #3512

Open
Mte90 opened this issue Dec 5, 2024 · 7 comments
Labels
enhancement New feature or request

Comments

@Mte90

Mte90 commented Dec 5, 2024

Describe the bug
I am trying to run Tabby, but I get:

WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:98: llama-server <chat> exited with status code 1, args: `Command { std: "//tabby_x86_64-manylinux2014-cuda122/llama-server" "-m" "/home/mte90/.tabby/models/TabbyML/Mistral-7B/ggml/model-00001-of-00001.gguf" "--cont-batching" "--port" "30892" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--chat-template" "<s>{% for message in messages %}{% if (message[\'role\'] == \'user\') != (loop.index0 % 2 == 0) %}{{ raise_exception(\'Conversation roles must alternate user/assistant/user/assistant/...\') }}{% endif %}{% if message[\'role\'] == \'user\' %}{{ \'[INST] \' + message[\'content\'] + \' [/INST]\' }}{% elif message[\'role\'] == \'assistant\' %}{{ message[\'content\'] + \'</s> \' }}{% else %}{{ raise_exception(\'Only user and assistant roles are supported!\') }}{% endif %}{% endfor %}", kill_on_drop: true }`

Information about your version
0.21

Ideally, when this output appears, Tabby should exit instead of continuing to retry the same command with the same parameters, or it should at least produce a log output that makes investigation possible; copying and pasting the command from the warning doesn't work because it is escaped.
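For reference, the command from the warning can be re-run by hand to see the underlying error; a rough sketch, using the paths from the warning above, dropping --log-disable so llama-server prints why it exits, and omitting --chat-template for brevity:

# re-run the command from the warning without --log-disable so the actual error is printed
./tabby_x86_64-manylinux2014-cuda122/llama-server -m /home/mte90/.tabby/models/TabbyML/Mistral-7B/ggml/model-00001-of-00001.gguf --cont-batching --port 30892 -np 1 --ctx-size 4096 -ngl 9999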

@wsxiaoys wsxiaoys added enhancement New feature or request and removed bug-unconfirmed labels Dec 5, 2024
@gtozzi

gtozzi commented Dec 5, 2024

Same here. Also tried 0.20.

@JamesNewton

Same. Version 0.23.

@wsxiaoys
Member

It's likely that the host CPU doesn't support AVX2, which would cause this issue. We should at least provide a proper error log for such a case.

Filing #3694
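As a quick sanity check, whether the CPU advertises AVX2 can be verified with, for example:

grep -o -m1 avx2 /proc/cpuinfo   # prints "avx2" if the CPU supports it, nothing otherwise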

@JamesNewton

It's likely that the host CPU doesn't support AVX2, which would cause this issue. We should at least provide a proper error log for such a case.

My system has an Intel® Core™ i7-4700HQ, which Intel says does have the AVX2 feature:
https://www.intel.com/content/www/us/en/products/sku/75116/intel-core-i74700hq-processor-6m-cache-up-to-3-40-ghz/specifications.html

Note that this error only appears for me if I try to use the --device vulkan option. It works fine without that, on the CPU only.

@zwpaper
Member

zwpaper commented Jan 15, 2025

Hi @JamesNewton, you might already be aware, but just to ensure clarity, running Tabby with Vulkan requires a Vulkan setup on your system. Can you confirm that this is in place?

If so, could you please provide more details about your system, such as the GPU, OS, and other relevant specifications?
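For example, assuming the vulkan-tools package is installed, the devices the Vulkan loader can see are listed with:

vulkaninfo --summary   # lists the Vulkan devices and drivers visible on the system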

@JamesNewton

JamesNewton commented Jan 16, 2025

Linux Mint 21.3 Cinnamon. I have an NVIDIA GeForce GTX 860M GPU and I'm running the 550.120 driver (the latest), which apparently comes with the Vulkan driver.

apt install libvulkan1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libvulkan1 is already the newest version (1.3.204.1-2).

The really weird thing now is that even when I try to start it with no device option or with --device cpu, it still errors out complaining that the Vulkan device doesn't support 16-bit storage, e.g.:

./tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cpu
⠸     5.850 s	Starting...2025-01-16T03:44:36.251809Z  WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:98: llama-server <embedding> exited with status code -1, args: `Command { std: "/home/jamesnewton/apps/tabby/llama-server" "-m" "/home/jamesnewton/.tabby/models/TabbyML/Nomic-Embed-Text/ggml/model-00001-of-00001.gguf" "--cont-batching" "--port" "30888" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--embedding" "--ubatch-size" "4096", kill_on_drop: true }`
2025-01-16T03:44:36.251835Z  WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:110: <embedding>: MESA-INTEL: warning: Haswell Vulkan support is incomplete
2025-01-16T03:44:36.251842Z  WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:110: <embedding>: ggml_vulkan: Found 1 Vulkan devices:
2025-01-16T03:44:36.251847Z  WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:110: <embedding>: Vulkan0: Intel(R) HD Graphics 4600 (HSW GT2) (Intel open-source Mesa driver) | uma: 1 | fp16: 0 | warp size: 32
2025-01-16T03:44:36.251852Z  WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:110: <embedding>: ggml_vulkan: device Vulkan0 does not support 16-bit storage.

Note (again) that this is with --device cpu. And this WAS working at least once. No idea what's going on.
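One guess worth checking, since the log shows Vulkan devices being enumerated even with --device cpu, is whether the bundled llama-server binary itself was built with the Vulkan backend; for example:

ldd /home/jamesnewton/apps/tabby/llama-server | grep -i vulkan   # non-empty output means this build links against Vulkan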

@Mte90
Author

Mte90 commented Jan 29, 2025

In my case:

grep avx /proc/cpuinfo

Returns

20:flags                : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
48:flags                : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
76:flags                : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
104:flags               : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities

So AVX2 is supported on my machine.

If I run
tabby/llama-server "-m" "/home/mte90/.tabby/models/TabbyML/Mistral-7B/ggml/model-00001-of-00001.gguf" "--cont-batching" "--port" "30892" "-np" "1" "--ctx-size" "4096" "-ngl" "9999"

It reports this:

llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 512.00 MiB on device 0: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
common_init_from_params: failed to create context with model '/home/mte90/.tabby/models/TabbyML/Mistral-7B/ggml/model-00001-of-00001.gguf'
srv    load_model: failed to load model, '/home/mte90/.tabby/models/TabbyML/Mistral-7B/ggml/model-00001-of-00001.gguf'
main: exiting due to model loading error

So it's probably just a matter of memory.
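If so, re-running with fewer layers offloaded to the GPU (a lower -ngl instead of 9999) and/or a smaller --ctx-size should confirm it; for example:

# offload only 20 layers and halve the context to reduce GPU memory use (values are just an example)
tabby/llama-server "-m" "/home/mte90/.tabby/models/TabbyML/Mistral-7B/ggml/model-00001-of-00001.gguf" "--cont-batching" "--port" "30892" "-np" "1" "--ctx-size" "2048" "-ngl" "20"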
