Models
Tabnine provides various AI models for Tabnine Chat. Tabnine Enterprise admins can connect Tabnine to their internal endpoints to enrich the Tabnine Chat experience for users. This allows administrators to integrate their company's private LLM instances, making them accessible to the engineering team directly within Tabnine.
Note
Tabnine's code completions only use the Tabnine Universal code completions model, which is both private and protected.
Private Model Endpoints Supported by Tabnine
Note
The list of supported models is updated frequently as new models become available.
Currently, Tabnine supports private endpoint connections for the following models:
| Model | | |
| --- | :---: | :---: |
| Claude 4.6 Sonnet | ✓ | ✓ |
| Claude 4.6 Opus | ✓ | ✓ |
| Claude 4.5 Sonnet | ✓ | ✓ |
| Claude 4.5 Opus | ✓ | ✓ |
| Claude 4.5 Haiku | ✓ | ✓ |
| Claude 4 Sonnet | ✓ | ✓ |
| Gemini 3.0 Pro | ✓ | |
| Gemini 2.5 Flash | ✓ | |
| Gemini 2.5 Pro | ✓ | |
| GPT-5.2 | ✓ | ✓ |
| GPT-5 | ✓ | ✓ |
| GPT-4o | ✓ | ✓ |
| Devstral-Small-2-24B-Instruct-2512 | ✓ | |
| Devstral-2-123B-Instruct-2512 | ✓ | |
| MiniMax 2.5 | ✓ | |
| Qwen-3-Coder-480B-A35B-Instruct | ✓ | |
| Qwen-3-30B (Chat only) | ✓ | |
Integration Requirements for Models
- Amazon Bedrock: Region, Access Key ID, Secret Access Key
- Azure: Azure endpoint, Key, Deployment ID
- OpenAI: Key, OpenAI endpoint
- OpenAI-Compatible: Llama endpoint, OpenAI Compatible Model name
- Google Vertex AI: Region, Project ID, Service Account
Viewing and setting up the available chat models
Note
If this functionality isn't visible, we recommend contacting your dedicated account manager at Tabnine. They'll assist you in setting the available chat AI models for your team.
Admins can manage the chat models available to their account and set up private endpoints for chat models:
1. Sign in to the Tabnine console as an admin.
2. Go to the Models page under Settings.
3. Toggle a model on and enter the relevant provider settings for your private endpoint.
Setting the default chat model
Admins can set a model as the account's default chat model. This is the model account users get by default; however, users can still switch to any other available model.
Change the account default model by expanding a specific model and toggling Set as default.
Bring Your Own AI (BYOAI) – Self‑Managed Models
Bring Your Own AI (BYOAI) lets you connect your own specialized Large Language Models (LLMs) to Tabnine for specific use cases, keep data within your own infrastructure, and configure everything in a self-serve approach. This also enables hybrid deployments with on-premise and cloud models.
Adding a self‑managed (BYOAI) model

Tabnine supports OpenAI‑compatible providers for self‑managed models. This includes any provider that exposes an API compatible with the OpenAI format.
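In practice, "compatible with the OpenAI format" means the endpoint accepts the standard chat-completions request shape. The sketch below illustrates that shape; the endpoint URL, key, and model name are placeholder assumptions, not real Tabnine or provider values.

```python
import json

def build_chat_request(endpoint: str, api_key: str, model: str, prompt: str,
                       max_tokens: int = 256):
    """Return (url, headers, body) for an OpenAI-style chat-completions call."""
    # OpenAI-compatible providers expose POST {base_url}/chat/completions.
    url = endpoint.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # bearer-token auth, as in the OpenAI API
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,                                       # provider-defined model identifier
        "messages": [{"role": "user", "content": prompt}],    # chat-style message list
        "max_tokens": max_tokens,                             # cap on generated tokens
    }).encode()
    return url, headers, body

# Hypothetical internal endpoint, for illustration only.
url, headers, body = build_chat_request(
    "https://llm.internal.example.com/v1", "sk-example", "my-model", "Hello")
```

Any provider whose API accepts this request and returns an OpenAI-style completions response should work as a self-managed model.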
To add a new BYOAI model, go to the Models page, then the Self-Managed tab, and click Add model.
In the Provider settings dialog, select Use OpenAI Compatible as a provider for a new model.

Fill in the following fields:
- Endpoint (required): The base URL of your OpenAI‑compatible API endpoint.
- Key: The API key used to authenticate requests to your provider. This key is stored securely and used only for requests from your organization.
- OpenAI Compatible Model name (required): The model identifier as defined by your provider.
- Certificate Authority: Path to a custom certificate authority (CA) bundle if your endpoint uses a private or internal CA. Leave blank if you are using a public certificate authority.
- Ignore Self Signed Certificate: Enable this option if your endpoint uses a self‑signed certificate and you want Tabnine to bypass certificate validation. Use this only for internal or testing environments, as it relaxes TLS validation.
- Max Tokens Per Request: Upper limit for the total tokens (prompt + response) per request. This helps prevent unexpectedly large or expensive prompts from being sent to your model.
- Max Response Tokens: Maximum number of tokens the model is allowed to generate in a response. This controls the length of model outputs and can help manage latency and cost.
Click Save to create the model.
Once saved, the new self‑managed model appears in the Self-Managed models list.
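The Certificate Authority and Ignore Self Signed Certificate fields correspond to standard client-side TLS settings. As an illustration only (this is not Tabnine's internal implementation, and the CA path is a hypothetical example), a minimal Python sketch using the standard `ssl` module:

```python
import ssl

# "Certificate Authority": trust a private/internal CA bundle when verifying
# the endpoint's certificate. The path below is a hypothetical example.
# ca_ctx = ssl.create_default_context(cafile="/etc/ssl/internal-ca.pem")

# "Ignore Self Signed Certificate": skip certificate validation entirely.
# Appropriate only for internal or testing environments.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False          # must be disabled before verify_mode
insecure_ctx.verify_mode = ssl.CERT_NONE     # no certificate verification
```

Preferring a custom CA bundle over disabling validation keeps TLS protection intact for internal endpoints.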