Air-gapped deployment guide

When installing Tabnine in an air-gapped environment, you will need to manually provision the docker images into the Kubernetes cluster that will host Tabnine’s enterprise services. This guide demonstrates how to obtain the images from our registry and shows two ways of provisioning them to the Kubernetes cluster: the recommended way, using your organization’s internal docker registry, or, for single-node environments, side-loading the images into the Kubernetes host.

Requirements

This guide goes over the air-gapped installation of Tabnine services. It assumes you have already installed your cluster, including GPU support, database access, and the other requirements from the installation instructions.

1. Obtaining images for installation/upgrade

When installing Tabnine in a non-air-gapped environment, you would simply run helm install with your values file. When air-gapped, you will run helm template with the same values, extract the images used, and download them to your machine.

There are many ways to achieve this. For this guide, we will use yq to extract the image names and docker to pull the images from the registry and export them.

Obtain the helm chart

Download the helm chart locally

helm pull oci://registry.tabnine.com/self-hosted/tabnine-cloud

You will find the helm chart in your current directory, in a file named tabnine-cloud-X.X.X.tgz where X.X.X is the version of the chart.
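
As a quick sanity check, you can confirm the version of the chart you just pulled before copying it into the air-gapped environment; helm show chart prints the chart metadata, and the version shown there is the X.X.X to use in the commands below.

helm show chart tabnine-cloud-X.X.X.tgz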

Obtain the images list (images.list)

To obtain the image list, run:

helm template tabnine-cloud-X.X.X.tgz \
  -n tabnine --values values.yaml \
  | yq '..|.image? | select(.)' | sort -u \
  | grep registry.tabnine.com > images.list
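
Optionally, inspect the resulting images.list to make sure the extraction worked; it should contain one fully qualified image reference per line, all pointing at registry.tabnine.com. For example:

wc -l images.list
head -3 images.list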

Download the images

You need at least 512GB of available disk space to download the images.
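
You can check the space available on the filesystem you are downloading to before starting, for example:

df -h .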

First, you will need to login to our registry server. Use the same credentials that the installation guide instructs you to use for the pull secret.

docker login registry.tabnine.com
xargs -a images.list -I image docker pull image
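
As a rough sanity check before exporting, you can compare the number of Tabnine images now present locally with the number of entries in images.list (the counts may differ slightly if unrelated Tabnine images from a previous run are already present on the machine):

docker images | grep -c registry.tabnine.com   # Tabnine images pulled locally
wc -l < images.list                            # images expected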

Export the images

docker save $(cat images.list) > images.tar.gz

The resulting images.tar.gz contains all the images required to run Tabnine in an air-gapped environment. Copy both images.tar.gz and the images.list file into the air-gapped environment.
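
Because the export is large, it can be worth generating checksums before copying the files and verifying them again inside the air-gapped environment (the checksums.sha256 file name below is just an example):

# On the machine with internet access
sha256sum images.tar.gz images.list > checksums.sha256
# After copying, inside the air-gapped environment
sha256sum -c checksums.sha256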

2. Importing the images into the cluster

The import process varies between organizations, depending on whether you are using an organization-internal registry server or side-loading the images directly into the Kubernetes node. The following steps are to be executed from within the air-gapped environment once tabnine-cloud-X.X.X.tgz, images.tar.gz, and images.list are present on the executing machine.

Import into another registry server (option 1)

This is the recommended setup, as it allows cluster nodes to pull images from a centralized location within your company rather than having the images loaded into the cluster. This makes updates easier and works better when your cluster consists of multiple nodes, as well as in cloud environments where the cloud provider may replace the underlying node as part of the cluster’s scaling or maintenance.

The internal registry can be a docker registry or a cloud registry, such as Google’s Artifact Registry, Amazon’s Elastic Container Registry (ECR), etc. If your cluster is cloud-based, access to the cloud provider’s native registry is already set up for you.
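
As an illustration only, if your target registry were Amazon ECR, authenticating docker before pushing typically looks like the following (the region and account ID are placeholders; consult your cloud provider’s documentation for the exact procedure for your registry):

aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com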

For the following steps, it is assumed that you have docker installed on the machine, that you have permissions to docker push to the target registry, and that the Kubernetes cluster has permissions to pull images from the target registry. For this guide, we will assume that the target registry name is target.registry and that the namespace used for Tabnine’s artifacts is tabnine. In other words, we assume that all of our images will be hosted under target.registry/tabnine. Please update the name to one that matches your organization.

Import the images into the local docker

docker load --input images.tar.gz

Rename and push images

When using another repository, we need to rename the docker images to the target repository and push them. For this, we will use a small script. Save the following as push_images.sh:

#!/bin/bash
set -e
images_file=$1
target_repo=$2

for img in $(cat "$images_file"); do
	# Rewrite the registry.tabnine.com prefixes to the target repository
	target_image=$(echo "$img" | sed -e "s#registry.tabnine.com/private#${target_repo}#g" -e "s#registry.tabnine.com/public#${target_repo}#g")
	# Retag the image, drop the original tag, push, then remove the local copy to free disk space
	docker tag "$img" "$target_image"
	docker rmi "$img"
	docker push "$target_image"
	docker rmi "$target_image"
	echo "$img pushed to $target_image"
done

And then run

sh push_images.sh images.list target.registry/tabnine

Once completed, it is recommended to clean up your docker installation to ensure the space used by the images is freed:

docker system prune

Edit your values file

As you are about to use a different registry, you will need to update your values.yaml to reflect the change, setting the global.image values to point to your server. (See the complete values.yaml parameters in the installation instructions.)

global:
  image:
    registry: target.registry
    baseRepo: tabnine
    privateRepo: tabnine
    # If the cluster needs image pull secret to pull the image, put it here, otherwise
    # leave an empty array
    imagePullSecrets: []
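
If your internal registry does require authentication, the cluster will need an image pull secret in the tabnine namespace, and that secret should be referenced in the imagePullSecrets array above (check the chart’s values reference for the exact format). A minimal sketch, where the secret name internal-registry-credentials and the credentials are placeholders:

kubectl -n tabnine create secret docker-registry internal-registry-credentials \
  --docker-server=target.registry \
  --docker-username=<username> \
  --docker-password=<password>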

Side load into the Kubernetes host (option 2)

If you are using a single-node Kubernetes server, you can side-load the images directly onto the server. For microk8s, the following command will import the images exported in the first step into the server:

sudo microk8s ctr images import images.tar.gz
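
You can confirm the import by listing the images known to microk8s’ containerd and making sure the Tabnine images appear, for example:

sudo microk8s ctr images ls -q | grep registry.tabnine.com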

Edit your values file

As the images are already on the server, you won’t need to use a pull secret in order to obtain the images. (See the complete values.yaml parameters in the installation instructions.)

global:
  image:
    imagePullSecrets: []

3. Install Tabnine helm chart

Once the images are available to the Kubernetes server, the next step is to install Tabnine’s helm chart. This assumes you have the tabnine-cloud-X.X.X.tgz and values.yaml files from the previous steps.

You will still need to create the namespace and certificate as instructed in the installation instructions before running helm upgrade.

helm upgrade tabnine tabnine-cloud-X.X.X.tgz \
  --install -n tabnine --create-namespace --values values.yaml
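
After the chart is installed, you can follow the rollout and confirm that the pods are able to pull their images from the location you configured; a pod stuck in ImagePullBackOff or ErrImagePull usually points to a registry or pull-secret configuration issue. For example:

kubectl -n tabnine get pods
kubectl -n tabnine get events --sort-by=.lastTimestamp | tail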
