Leap uses Anthropic’s Claude 4 Sonnet and Google’s Gemini models to power its AI. These models process tokens, which Anthropic defines as:

The smallest individual units of a language model. A token can represent a word, part of a word, a character, or even a byte (especially in the case of Unicode).

Each token comes with a cost, and Leap is billed by the AI providers for every token it uses. This is why our paid plans are priced based on how many tokens each plan includes.
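
As a rough illustration of how text maps to tokens, the sketch below uses a common rule of thumb of about four English characters per token. This is an assumed heuristic for ballpark estimates only; the actual tokenizers used by Claude and Gemini split text differently and produce exact counts.

```ts
// Rough token estimate using the assumed ~4-characters-per-token heuristic.
// Real tokenizers count differently; treat this as illustration, not billing math.
function estimateTokens(text: string): number {
  const AVG_CHARS_PER_TOKEN = 4; // assumption, not an exact figure
  return Math.ceil(text.length / AVG_CHARS_PER_TOKEN);
}

const prompt = "Add a dark mode toggle to the settings page.";
console.log(estimateTokens(prompt)); // ~11 tokens for this 44-character prompt
```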

How Leap uses tokens

Tokens are consumed in three main ways when using Leap:

  • Messages between you and the AI.
  • The AI writing new code for your application.
  • The AI reading and analyzing your existing codebase, including any changes you’ve made.

How Leap optimizes token usage

We’re always working to make Leap consume tokens as efficiently as possible, and we’re investing heavily in smart logic and advanced tooling that optimizes what context is passed to the AI and keeps its output focused on relevant changes.

This already includes strategies like:

  • Only including relevant chat history and change history from the currently open Change, rather than all historical context.
  • Controlling the AI output and focusing it on writing functional and efficient code.

Tips to reduce your token usage

Here are some simple ways you can reduce token usage and improve performance:

Merge early and often

When you first send a prompt in Leap, it creates a Change for the ongoing chat with the AI. You can think of this as a branch.

Whenever you’ve completed a self-contained piece of work in your application, we recommend clicking Merge change and continuing in a new Change. You can think of this as merging a pull request.

Merging early and often ensures the context passed to the AI is focused only on what is relevant for the changes you are currently working on.

Use Scope context

If you have a large codebase, manually defining which parts of it are sent to the AI can significantly reduce token usage. Click Scope context in the prompt text area to select the files and folders relevant to the change you are working on.

Example: If you are asking the AI to only make UI changes on a specific page, use the Scope context feature to select the relevant page.

Avoid repeated “Fix with Leap” attempts

Repeatedly clicking Fix with Leap can quickly consume unnecessary tokens. After an attempt, review the changes and refine your next request if needed. Not every issue can be fixed automatically without giving the AI more context, so if fixes keep failing, it’s often worth debugging the error yourself or making manual edits.

Add error handling and logging

If you’re stuck in an error loop, prompt Leap to add detailed logging and stronger error handling. Leap is good at inserting useful logs at key steps, and these logs help the AI understand what’s going wrong and allow it to respond with more precise fixes in future attempts.
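
As a minimal sketch of what that can look like in practice, the TypeScript example below logs the inputs and outcome of each step and fails fast with a descriptive error. The createOrder and saveOrder functions are hypothetical stand-ins, not part of Leap or any specific codebase.

```ts
// Hypothetical persistence step; a stand-in for whatever your app actually does.
async function saveOrder(userId: string, items: string[]): Promise<string> {
  return `order_${Date.now()}`;
}

async function createOrder(userId: string, items: string[]): Promise<string> {
  // Log the inputs at the start of the operation.
  console.log("createOrder: start", { userId, itemCount: items.length });

  if (items.length === 0) {
    // Fail fast with a clear message instead of letting a vaguer error surface later.
    throw new Error(`createOrder: user ${userId} submitted an empty cart`);
  }

  try {
    const orderId = await saveOrder(userId, items);
    console.log("createOrder: saved", { userId, orderId });
    return orderId;
  } catch (err) {
    // Log the failing input alongside the error so the next fix attempt has real context.
    console.error("createOrder: failed to save order", { userId, items, err });
    throw err;
  }
}

// Example usage with hypothetical data:
createOrder("user_123", ["item_a", "item_b"]).catch(console.error);
```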

Use the Undo functionality

If something goes wrong, you can Undo the latest revision and return to the previous state without consuming any tokens. This is more efficient than asking the AI to revert the change.

Undoing a revision is permanent (there’s no redo), so make sure you’re ready before using it.

Use the Discard change functionality

If you’ve made multiple revisions in a Change and you’re not happy with the result, you can revert to the latest Merged change by clicking Discard change on the currently active Change. This returns your codebase to the previously merged state without consuming any tokens, and it’s more efficient than asking the AI to revert the latest changes.

Discarding a change is permanent (there’s no un-discard), so make sure you’re ready before using it.