The Tabnine client runs as an IDE plugin/extension on the end user's machine.
Machine specs
One of the following OS/architecture combinations:
Windows (Windows 10+), x86_64 or i686
Linux (kernel 6.2+), x86_64
Mac OS (12+), x86_64 or aarch64
16 GB+ RAM
8+ CPU cores
Storage: 100 GB available space
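As a quick sanity check, the snippet below verifies a workstation against the RAM, CPU, and free-storage requirements above. This is a minimal sketch: the thresholds mirror the specs listed here, and checking free space on the home drive is an assumption.

```python
import os
import shutil

import psutil  # third-party: pip install psutil

MIN_RAM_GB = 16
MIN_CORES = 8
MIN_FREE_STORAGE_GB = 100
CHECK_PATH = os.path.expanduser("~")  # assumption: measure free space on the home drive

ram_gb = psutil.virtual_memory().total / 1024**3
cores = os.cpu_count() or 0
free_gb = shutil.disk_usage(CHECK_PATH).free / 1024**3

print(f"RAM:     {ram_gb:.1f} GB ({'OK' if ram_gb >= MIN_RAM_GB else 'below 16 GB minimum'})")
print(f"Cores:   {cores} ({'OK' if cores >= MIN_CORES else 'below 8-core minimum'})")
print(f"Storage: {free_gb:.1f} GB free ({'OK' if free_gb >= MIN_FREE_STORAGE_GB else 'below 100 GB minimum'})")
```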
Supported IDEs
| IDE | Minimal supported version | Latest supported version | Windows OS | Mac OS | Linux OS |
| --- | --- | --- | --- | --- | --- |
| VS Code | 1.85 | 1.108 | ✓ | ✓ | ✓ |
| JetBrains IDEs* | 2023.3 | 2025.3 | ✓ | ✓ | ✓ |
| Eclipse | 4.28 (2023-06) | 4.38 (2025-12) | ✓ | ✓ | ✓ |
| Visual Studio 2022 | 17.10 | 17.14 | ✓ | – | – |
| Visual Studio 2026 (beta) | – | – | ✓ | – | – |

* JetBrains IDEs include IntelliJ, PyCharm, WebStorm, PhpStorm, GoLand, RubyMine, CLion, AppCode, Rider, DataGrip, and Android Studio.
Network connection
Connection to the Tabnine cluster on port 443
Recommended for the initial install: access to the IDE marketplaces (e.g., VS Code Marketplace, JetBrains Plugin Marketplace)
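A quick way to confirm this connectivity from an end-user machine is a TLS reachability check like the one below. This is a minimal sketch: the cluster hostname is a placeholder for your deployment's actual URL, and the marketplace hostnames are the public defaults.

```python
import socket
import ssl

# Assumption: replace the first entry with the hostname of your Tabnine cluster.
HOSTS = ["tabnine.example.com", "marketplace.visualstudio.com", "plugins.jetbrains.com"]
PORT = 443

context = ssl.create_default_context()
for host in HOSTS:
    try:
        with socket.create_connection((host, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}:{PORT} reachable, TLS {tls.version()}")
    except OSError as exc:
        print(f"{host}:{PORT} NOT reachable: {exc}")
```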
Permissions
Execute permissions for the following executables:
TabNine
TabNine-deep-local
TabNine-deep-cloud
WD-TabNine
TabNine-server-runner
vdb
jdtls
typescript-language-server
Write and execute permissions for the following machine paths:
Linux: ~/.config & ~/.tabnine
Mac OS: /Users/{{username}}/Library/Preferences & /Users/{{username}}/Library/Application Support
Windows: C:\Users\{{username}}\AppData\Roaming\
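The snippet below is a minimal sketch of how you might confirm these path permissions before rollout; the path list mirrors the entries above, resolved for the current user and OS.

```python
import os
import platform
from pathlib import Path

home = Path.home()
system = platform.system()

# Paths the Tabnine client needs write and execute permissions for (per the list above).
if system == "Linux":
    paths = [home / ".config", home / ".tabnine"]
elif system == "Darwin":
    paths = [home / "Library" / "Preferences", home / "Library" / "Application Support"]
elif system == "Windows":
    paths = [Path(os.environ.get("APPDATA", home / "AppData" / "Roaming"))]
else:
    paths = []

for path in paths:
    ok = path.exists() and os.access(path, os.W_OK | os.X_OK)
    print(f"{path}: {'write+execute OK' if ok else 'MISSING write/execute permission'}")
```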
Tabnine Deployment Options
Tabnine can be deployed in one of the following ways:
Single/Multi-Tenant SaaS
Private cloud / On-prem installation using private API endpoints
Private cloud / On-prem installation using open-weight models
Single/Multi-Tenant SaaS
This deployment allows you to utilize Tabnine’s private LLM endpoints to support both Chat and Agentic workflows.
Models
This deployment utilizes the following families of LLMs for both Chat and Agent:
GPT
Claude
Gemini
Hardware Requirements
None.
Private Cloud / On-Prem Installation Using Private API Endpoints
You can install Tabnine on any of the leading private clouds (AWS, Azure, GCP) as well as with an on-prem Kubernetes deployment, while utilizing your own private endpoints to power Agentic workflows and the Chat model.
Models
This deployment utilizes the following families of LLMs for both Chat and Agent:
GPT
Claude
Gemini
Hardware Requirements
Tabnine requires a single GPU to support its software processes; we recommend installing Tabnine on one H100 GPU.
Private Cloud / On-Prem Installation Using Open-Weight Models
You can also power Tabnine with open-weight models installed on-premises or on one of the private clouds mentioned above.
Models
For Self-Hosted (SH) customers, your hardware needs depend on whether you already have an open-weight model running within your infrastructure. If you do, Tabnine can connect to any of the following:
Tabnine-Supported Open-Weight Models
Devstral-Small-2-24B-Instruct-2512
Devstral-2-123B-Instruct-2512
MiniMax-M2.1
GPT-OSS-120B
GLM-4.7
Qwen-3-Coder-480B-A35B-Instruct
Qwen-3-30B (Chat only)
If not, we will install one of the following models on-premises for you:
Open-Weight Models that Tabnine Offers to Install On-Prem
Devstral-Small-2-24B-Instruct-2512
Devstral-2-123B-Instruct-2512
MiniMax-M2.1
Hardware Requirements
Installation requirements differ depending on whether the deployment supports Agentic workflows plus Chat, or Chat only; they are sized to ensure users have an optimal experience with Tabnine.
Agent + Chat

| Model | ≤100 Users (Recommended) | ≤100 Users (Minimal) | 101-500 Users (Recommended) | 101-500 Users (Minimal) | 501-1000 Users (Recommended) | 501-1000 Users (Minimal) | 1001-2000 Users (Recommended) | 1001-2000 Users (Minimal) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Devstral-Small-2-24B-Instruct-2512 | 2 B200 | 2 H100 | 2 B200 | 3 H100 | 4 B200 | 6 H100 | 8 B200 | 12 H100 |
| Devstral-2-123B-Instruct-2512 | 4 B200 | 4 H100 | 8 B200 | 8 H100 | 16 B200 | 8 B200 | 24 B200 | 16 B200 |
| MiniMax-M2.1 | 2 B200 | 2 H200 | 4 B200 | 4 H200 | 8 B200 | 8 H200 | 16 B200 | 16 H200 |
| GPT-OSS-120B | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 4 H100 | 4 B200 | 8 H100 |
| GLM-4.7 | 2 B200 | 8 H100 | 4 B200 | 2 B200 | 8 B200 | 4 B200 | 16 B200 | 8 B200 |
| Qwen-3-Coder-480B-A35B-Instruct | 2 B200 | 8 H100 | 4 B200 | 2 B200 | 8 B200 | 4 B200 | 16 B200 | 8 B200 |
Chat Only

| Model | ≤100 Users (Recommended) | ≤100 Users (Minimal) | 101-500 Users (Recommended) | 101-500 Users (Minimal) | 501-1000 Users (Recommended) | 501-1000 Users (Minimal) | 1001-2000 Users (Recommended) | 1001-2000 Users (Minimal) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Devstral-Small-2-24B-Instruct-2512 | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 4 H100 |
| Devstral-2-123B-Instruct-2512 | 2 B200 | 4 H100 | 2 B200 | 4 H100 | 4 B200 | 8 H100 | 8 B200 | 16 H100 |
| MiniMax-M2.1 | 2 B200 | 2 H200 | 2 B200 | 2 H200 / 4 H100 | 2 B200 | 4 H200 / 8 H100 | 3 B200 | 8 H200 |
| GPT-OSS-120B | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 3 H100 |
| GLM-4.7 | 2 B200 | 8 H100 | 2 B200 | 2 B200 | 4 B200 | 2 B200 | 6 B200 | 4 B200 |
| Qwen-3-Coder-480B-A35B-Instruct | 2 B200 | 8 H100 | 2 B200 | 8 H100 | 4 B200 | 4 B200 | 8 B200 | 8 B200 |
| Qwen-3-30B | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 2 H100 | 2 B200 | 2 H100 |
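To illustrate how the two tables above are read, here is a minimal sketch of a sizing lookup. Only the GPT-OSS-120B rows are transcribed; extend the dictionary with the remaining rows as needed.

```python
# GPU sizing lookup for the tables above (sketch; only GPT-OSS-120B transcribed).
SIZING = {
    # mode -> model -> user tier -> (recommended, minimal)
    "agent+chat": {
        "GPT-OSS-120B": {
            "<=100":     ("2 B200", "2 H100"),
            "101-500":   ("2 B200", "2 H100"),
            "501-1000":  ("2 B200", "4 H100"),
            "1001-2000": ("4 B200", "8 H100"),
        },
    },
    "chat": {
        "GPT-OSS-120B": {
            "<=100":     ("2 B200", "2 H100"),
            "101-500":   ("2 B200", "2 H100"),
            "501-1000":  ("2 B200", "2 H100"),
            "1001-2000": ("2 B200", "3 H100"),
        },
    },
}

def gpus_for(mode: str, model: str, users: int) -> tuple[str, str]:
    """Return (recommended, minimal) GPU counts for a deployment."""
    if users <= 100:
        tier = "<=100"
    elif users <= 500:
        tier = "101-500"
    elif users <= 1000:
        tier = "501-1000"
    elif users <= 2000:
        tier = "1001-2000"
    else:
        raise ValueError("For more than 2000 users, contact Tabnine for sizing.")
    return SIZING[mode][model][tier]

print(gpus_for("agent+chat", "GPT-OSS-120B", 750))  # ('2 B200', '4 H100')
```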
GPU Availability by Cloud Provider
| GPU | AWS | Azure | GCP |
| --- | --- | --- | --- |
| H100 | p5.4xlarge (H100 80GB) | NC40ads_H100_v5 (H100 94GB) | a3-highgpu-1g (H100 80GB) |
| H200 | p5en.48xlarge (8×H200 141GB) | ND96isr_H200_v5 (8×H200 141GB) | a3-ultragpu-8g (8×H200 141GB) |
| B200 | p6-b200.48xlarge (8×B200 HBM3e) | ND128isr_NDR_GB200_v6 (4×Blackwell 192GB) | a4-highgpu-8g (8×B200 HBM3e) |
If the open-weight model you want to use is not on this list, contact us and our team will work with you.
Optional Features / Additional Requirements
Provenance and Attribution:
Storage: 5 TB available space
Domain Name System (DNS)
DNS configured with an A or CNAME record for the load balancer where the application will be exposed.
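A quick way to confirm the record is in place before installation is a resolution check like the one below. This is a minimal sketch; the hostname is a placeholder for the domain you expose Tabnine on.

```python
import socket

# Assumption: replace with the FQDN you configured for the Tabnine load balancer.
HOSTNAME = "tabnine.mycompany.example"

try:
    infos = socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print(f"{HOSTNAME} resolves to: {', '.join(addresses)}")
except socket.gaierror as exc:
    print(f"{HOSTNAME} does not resolve yet: {exc}")
```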
TLS Certificate
TLS certificate and private key issued and signed by a certificate authority that you trust (key and certificate in PEM format).
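Before handing the certificate to the installer, you may want to confirm that both PEM files parse, that the key matches the certificate, and that the certificate has not expired. A minimal sketch using the third-party cryptography package (version 42+ for the UTC accessors); the file names are placeholders.

```python
from datetime import datetime, timezone
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Assumption: adjust the file names to your certificate and key.
cert = x509.load_pem_x509_certificate(Path("tls.crt").read_bytes())
key = serialization.load_pem_private_key(Path("tls.key").read_bytes(), password=None)

def spki(public_key):
    """Public key in SubjectPublicKeyInfo PEM form, for comparison."""
    return public_key.public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
    )

print("Key matches certificate:", spki(key.public_key()) == spki(cert.public_key()))
print("Expires:", cert.not_valid_after_utc)
print("Still valid:", cert.not_valid_after_utc > datetime.now(timezone.utc))
```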
Network Connection
Connection to Tabnine container registry:
Host: registry.tabnine.com
IP: 34.72.243.185
Port: 443
Connection to Tabnine logs gateway for collecting metrics and logs (optional):
Host: logs-gateway.tabnine.com
IP: 34.123.33.186
Port: 443
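To confirm that outbound firewall rules allow these endpoints, a check such as the following can be run from the cluster network. This is a minimal sketch; the expected IPs mirror the values listed above.

```python
import socket

# Hosts and the published IPs listed above.
ENDPOINTS = {
    "registry.tabnine.com": "34.72.243.185",
    "logs-gateway.tabnine.com": "34.123.33.186",  # optional, for metrics/logs
}

for host, expected_ip in ENDPOINTS.items():
    try:
        resolved = socket.gethostbyname(host)
    except OSError as exc:
        print(f"{host}: DNS resolution failed ({exc})")
        continue
    note = "matches" if resolved == expected_ip else "differs from"
    try:
        with socket.create_connection((host, 443), timeout=5):
            reachable = "port 443 reachable"
    except OSError as exc:
        reachable = f"port 443 blocked ({exc})"
    print(f"{host}: {resolved} ({note} published {expected_ip}), {reachable}")
```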
Databases
Redis version 6.5+
PostgreSQL version 15.0+
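A minimal sketch for verifying the database versions against these minimums, assuming the third-party redis and psycopg2 client libraries; the hosts and credentials are placeholders.

```python
import psycopg2  # third-party: pip install psycopg2-binary
import redis     # third-party: pip install redis

# Assumption: replace the hosts and credentials below with your own.
r = redis.Redis(host="redis.internal", port=6379)
redis_version = r.info("server")["redis_version"]
redis_ok = tuple(int(x) for x in redis_version.split(".")[:2]) >= (6, 5)
print(f"Redis {redis_version}: {'OK' if redis_ok else 'below required 6.5'}")

pg = psycopg2.connect(host="postgres.internal", dbname="postgres",
                      user="tabnine", password="change-me")
pg_ok = pg.server_version >= 150000  # integer form, e.g. 150004 == 15.4
print(f"PostgreSQL {pg.server_version}: {'OK' if pg_ok else 'below required 15.0'}")
```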
Kubernetes
On-Premises Kubernetes
Tabnine Enterprise can be installed on a new or existing Kubernetes cluster. For customers installing on a brand-new Kubernetes cluster, we recommend the following minimum hardware specifications for the Kubernetes control plane only (not including Tabnine's own requirements).
| Specs (Per Node) | HA | Non-HA |
| --- | --- | --- |
| Number of Nodes | 3 | 1 |
| CPU | 4 CPU | 4 CPU |
| Memory | 16 GB | 16 GB |
| Disk | 256 GB SSD | 256 GB SSD |
| Network | 1 GbE | 1 GbE |
| Operating System | RHEL or Ubuntu | RHEL or Ubuntu |
On-Premises
| Specs (Minimum) | 1-200 Users | 201-500 Users | 501-1000 Users | 1001-2000 Users | 2000+ Users |
| --- | --- | --- | --- | --- | --- |
| CPU | 64 | 64 | 72 | 72 | 96 |
| Memory | 144 GB | 144 GB | 192 GB | 192 GB | 256 GB |
| Disk | 10 TB SSD | 10 TB SSD | 16 TB SSD | 16 TB SSD | 32 TB SSD |
On-Prem Hybrid allows connecting to external models for the main LLM, including Claude Sonnet, ChatGPT, Gemini, etc.