# Tabnine's Prompting Guide

## **Defining a Prompting Strategy for Code Projects**

Every project requires a different strategy based on your priorities. Effective prompting starts with structured thinking.

In Tabnine, you’re not just prompting a text model, but an Agent that has access to your codebase, open files, and custom commands.

A good prompt doesn’t just describe what you want; it instructs the Agent *how* to operate in your project: what it can look at, what rules to follow, and what to leave alone.

***

#### **A. Specify the Key Elements**

When designing a prompt, clarify the following:

1. **The Task:**
   * What are you trying to achieve? What’s the ‘goal state’? What should the result be?
   * *Example*: "Apply the extract method for refactoring to this code."
2. **Context:**
   * What is the source material you’re working from, and where should the agent look (open file, specific path, remote repo)?
   * *Example*: "@related\_class" or "See the implementation in `utils/helpers.py`."
3. **Constraints**: What boundaries or conditions must be respected?
   * *Example*: "Keep current behavior. Preserve function signatures."

| Note: It is best to avoid *negative prompting*. A negative version of the above example would be "Do not change behavior. Do not alter function signatures." |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------- |

4. **Process:** What are the instructions, or how should the agent approach the work step-by-step?
   * *Example*: "Start by identifying duplicate logic, then suggest reusable methods."
5. **Validation**: What signals indicate that the prompt was successful?
   * *Example*: "Refactored code should pass all existing tests."
6. **Format**: How should the output be structured?
   * *Example*: "Respond with a JSON object containing `method_name`, `start_line`, and `new_code`."
7. **(Re-)Iteration Plan**: What are the next steps if the first output isn’t perfect? Treat this like directing a junior engineer: tell the agent when to pause, ask questions, or request approval before applying large changes.
   * *Example*: "After generating the refactor, ask: 'Do you want to apply this to other classes?'"
8. **Ask for Feedback** *(optional)*:
   * *Example*: "If this approach looks flawed, explain why and propose an alternative."

***

**Template Example**

```
Goal: Apply extract method refactoring to the selected function
Source: @related_class
Constraints:
  - Preserve current behavior
  - Maintain interface compatibility
Process:
  - Identify duplicate code blocks
  - Suggest method names based on action
Validation: Must retain all unit test coverage
Format: List of changes in JSON:
  [
    {
      "method_name": "...",
      "start_line": ...,
      "new_code": "..."
    }
  ]
Next Step: Ask if similar changes should be applied elsewhere
Feedback: Flag any logic that may not refactor cleanly
```

***

**When using Tabnine as an Agent over your codebase, be explicit about scope and permissions:**

* **Scope: “Only modify this file and test files in `/tests/user/`.”**
* **Permissions: “Do not create new services; only change existing methods.”**
* **Safety checks: “Before proposing changes, summarize what you understood about the current behavior.”**
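Combined into a single prompt, these guardrails might read like the following sketch, in the same style as the template above (the file path and service wording are illustrative):

```
Scope: Only modify user_service.py and test files in /tests/user/
Permissions: Change existing methods only; do not create new services
Safety check: Before proposing changes, summarize what you understood
  about the current behavior and wait for confirmation
```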

**Why This Works**

A prompting strategy aligns the LLM with your development mindset. Instead of fishing for the right phrasing, you define a clear lane for the model to operate in—resulting in better, faster, and more predictable completions.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.tabnine.com/main/getting-started/tabnines-prompting-guide.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present in the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
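As a minimal sketch, the request above can be issued with Python's standard library; the question string here is only an illustration, and the helper name `build_ask_url` is not part of any Tabnine API:

```python
from urllib.parse import quote
from urllib.request import urlopen  # used only for the actual request below

# Base URL of this documentation page; the `ask` parameter carries the question.
BASE = "https://docs.tabnine.com/main/getting-started/tabnines-prompting-guide.md"

def build_ask_url(question: str) -> str:
    """Return the query URL with the question percent-encoded."""
    return f"{BASE}?ask={quote(question)}"

url = build_ask_url("How do I scope an Agent to a single directory?")
print(url)

# To actually query the docs (requires network access):
# with urlopen(url) as resp:
#     answer = resp.read().decode("utf-8")
```

Percent-encoding the question keeps spaces and punctuation from breaking the query string.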
