What's new? (April 2024)
Last updated
For Tabnine's private installation release notes, click here
April 18, 2024
Highlights:
The users' page and the CSV-based report now show users with pending invitations.
Additionally, the admin can resend or revoke pending invitations.
End users of Tabnine are referred to help resources (a web page and a support email). By default, these point to the Tabnine website and Tabnine's Support email (for example, on the Installation Instructions page).
Enterprise admins can now customize this and define internal resources that are specific to their organization or accessible to their users.
The team admin can now enable and configure SSO directly in the admin console instead of in the installation YAML.
When upgrading from an older version, the SSO values in the existing YAML file are migrated to your installation once.
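For context, self-hosted installations previously defined SSO in the installation values YAML. The snippet below is a hypothetical sketch of what such a section might look like; the key names (`sso`, `provider`, `samlMetadataUrl`, `allowedDomains`) are illustrative assumptions, not Tabnine's actual schema, so consult your own values file before upgrading:

```yaml
# Hypothetical sketch only -- key names are illustrative, not the real schema.
# After upgrading, these settings are managed in the admin console instead.
sso:
  enabled: true
  provider: saml                                      # identity provider type
  samlMetadataUrl: https://idp.example.com/metadata.xml
  allowedDomains:
    - example.com
```

On upgrade, values like these are read from the existing YAML and migrated into the admin console configuration a single time; subsequent changes are made in the console.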
Tabnine offers different AI models for Tabnine Chat. Tabnine Enterprise admins (in Tabnine SaaS) control which models are available to the users in their organization.
We’re thrilled to unveil a powerful new capability that puts you in the driver’s seat when using Tabnine. Starting today, you can switch the underlying large language model (LLM) that powers Tabnine Chat at any time. In addition to the built-for-purpose Tabnine Protected model that we custom-developed for software development teams, you now have access to additional models from OpenAI and a new model that brings together the best of Tabnine and Mistral, the leading open source AI model provider.
You can choose from the following models with Tabnine’s AI software development chat tools:
Tabnine Protected: Tabnine’s original model is designed to deliver high performance without the risks of intellectual property violations or exposing your code and data to others.
Tabnine + Mistral: Tabnine’s newest offering is built to deliver the highest class of performance while still maintaining complete privacy.
GPT-3.5 Turbo and GPT-4 Turbo: The industry's most popular LLMs are proven to deliver the highest performance for teams willing to share their data externally.
More importantly, you’re not locked into any one of these models. Switch instantly between models for specific projects, use cases, or to meet the requirements of specific teams. No matter which LLM you choose, you’ll always benefit from the full capability of Tabnine’s highly tuned AI agents.
The switchable chat models feature is currently available for Tabnine Chat users with a SaaS deployment, and is compatible with all IDEs supporting Tabnine Chat.