
use remote ai model for Tabby itself #3764

Open
itforxp opened this issue Jan 26, 2025 · 9 comments
Labels
documentation Improvements or additions to documentation

Comments


itforxp commented Jan 26, 2025

I have rented a GPU server and want to point my locally installed Tabby at it. Is this possible at the moment?
Something like:
export TABBY_BACKEND_LLAMA=http://<REMOTE_MODEL_URL>
export TABBY_BACKEND_AUTHORIZATION="Bearer <YOUR_API_KEY>"
docker run -it -p 8080:8080 -v $HOME/.tabby:/data registry.tabbyml.com/tabbyml/tabby server --remote


Please reply with a 👍 if you want this feature.

@itforxp itforxp added the enhancement New feature or request label Jan 26, 2025
@wsxiaoys (Member)

Hi - https://tabby.tabbyml.com/docs/references/models-http-api/llama.cpp/ contains an example of connecting Tabby to a remote model HTTP server.


itforxp commented Jan 27, 2025

Thanks. How do I run Tabby so that it uses the remote model instead of a local one? The current Tabby binary options force me to run a local model.


itforxp commented Jan 28, 2025

Also, I see this in ~/.tabby/config.toml:

[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://localhost:8888"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>" 

How can I provide a Bearer token to authorize against the remote Ollama side?

@wsxiaoys (Member)

You could set the api_key field for authorization. https://tabby.tabbyml.com/docs/references/models-http-api/openai/ contains relevant examples.
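For reference, a minimal sketch of what the earlier completion snippet might look like with an api_key added (the endpoint and token values here are placeholders, and the assumption is that Tabby forwards api_key as a Bearer token, as the OpenAI-compatible examples in the linked docs suggest):

```toml
# ~/.tabby/config.toml -- placeholder values for illustration
[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://localhost:8888"
api_key = "your-secret-token"  # assumed to be sent as "Authorization: Bearer <api_key>"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"
```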


itforxp commented Jan 29, 2025

Thanks! But how do I disable Tabby's locally served model? I tried killing the local Tabby model process, but it restarted immediately.

@wsxiaoys (Member)

Tabby starts three default models (completion, chat, and embedding) if they are not configured remotely. For more information, please refer to our documentation at: https://tabby.tabbyml.com/docs/administration/model/

Could you please confirm that you have set up all three models?
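To make the "all three models" point concrete, here is a hedged sketch of a config.toml that points every role (completion, chat, embedding) at a remote OpenAI-compatible server, so that no local model should need to be started. The endpoint, model names, and key are placeholders, not values from this thread:

```toml
# ~/.tabby/config.toml -- all three roles pointed at a remote server
# (placeholder endpoint, model names, and key; adjust to your provider)
[model.completion.http]
kind = "openai/completion"
model_name = "my-code-model"
api_endpoint = "https://example.com/v1"
api_key = "sk-placeholder"

[model.chat.http]
kind = "openai/chat"
model_name = "my-chat-model"
api_endpoint = "https://example.com/v1"
api_key = "sk-placeholder"

[model.embedding.http]
kind = "openai/embedding"
model_name = "my-embedding-model"
api_endpoint = "https://example.com/v1"
api_key = "sk-placeholder"
```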


itforxp commented Feb 4, 2025

(screenshot attached)
I still have only local models.

@wsxiaoys wsxiaoys added documentation Improvements or additions to documentation and removed enhancement New feature or request labels Feb 8, 2025
@wsxiaoys (Member)

Please ensure that you have correctly configured the ~/.tabby/config.toml file, and verify that your tabby command is utilizing config.toml.


10373064 commented Feb 19, 2025

> Please ensure that you have correctly configured the ~/.tabby/config.toml file, and verify that your tabby command is utilizing config.toml.

I'm having the same problem. I've been working on it all afternoon and still can't find a solution.
I have read the relevant documentation, but the configuration does not seem to take effect.
Using tabby_x86_64-windows-msvc 0.24.
Configuration in config.toml:

[logs]
level = "debug" # "silent" or "error" or "debug"

[model.chat.http]
kind = "openai/chat"
model_name = "qwen2.5-coder-14b-instruct"
api_endpoint = "https://dashscope.aliyuncs.com/compatible-mode/4"
api_key = "sk-xxxx"

[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-14b-instruct"
api_endpoint = "https://dashscope.aliyuncs.com/compatible-mode/v1"
api_key = "sk-xxxx"

Start command: .\tabby.exe serve

How can I verify that the configuration in config.toml is actually being used?
