How to Use PearAI IDE

Explore PearAI's AI-powered IDE designed to optimize your workflow with tools like context providers, autocomplete, and seamless integration with external services. Learn how PearAI helps improve coding efficiency and productivity.

Emmanuel Mumba

21 June 2025


PearAI is a powerful AI-driven Integrated Development Environment (IDE) that simplifies coding through seamless AI integration. It is a compelling alternative to editors such as VS Code and AI-first tools like Cursor, offering built-in AI features without requiring additional extensions. In this guide, we'll walk you through how to install and use PearAI effectively.

💡
Before diving into the PearAI AI IDE, check out Apidog—a free tool designed to simplify API testing and integration. With Apidog’s intuitive interface, you can easily debug and optimize your API workflows, streamlining the development process and saving you valuable time. Whether you’re building APIs or troubleshooting issues, Apidog has everything you need to enhance your workflow.

Getting Started: Installing PearAI

Download the Setup

Installing PearAI

Customizing the Interface

Features and Functionalities

1. VS Code-Like Experience

PearAI replicates the VS Code interface, giving you access to the familiar editor features and layout you already know.

2. Signing In to PearAI

3. AI-Powered Assistance

Quickstart

Example: Getting Started with a New Codebase and Building a Feature in Just 2 Minutes

"@" Commands

How It Works

@ Commands enhance your prompts by providing additional context, making PearAI more aware of your work environment. Simply type @ in the PearAI chat to see a list of available context options. Each option is powered by a plugin, allowing you to reference specific information easily.

For example, if you're facing issues running an app locally and encountering multiple errors in the terminal, you can use @terminal to include the error logs and @files to attach the package.json file. This enables PearAI to quickly analyze and debug the issue, streamlining the entire troubleshooting process.
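As a rough illustration, a prompt for that scenario might look something like this in the PearAI chat (the wording and file are just an example, not a required format; each @ reference opens a list of items you can attach):

@terminal @files
My app won't start locally — can you look at the terminal errors and my package.json and tell me what's going wrong?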

Built-in Context Providers

PearAI includes several pre-configured context providers to enhance your workflow. You can customize these by adding or removing providers in config.json under the contextProviders list.

@Files

Enables you to attach a file as context, allowing PearAI to reference its contents for better assistance.

{
  "contextProviders": [
    {
      "name": "files"
    }
  ]
}


@Codebase

Includes the entire codebase as context. Be mindful that larger codebases may consume significant credits.

{
  "contextProviders": [
    {
      "name": "codebase"
    }
  ]
}

@Code

Allows you to specify individual functions or classes for more focused assistance.

{
  "contextProviders": [
    {
      "name": "code"
    }
  ]
}

@Docs

Includes a documentation site as context, making it easier to reference official documentation.

{
  "contextProviders": [
    {
      "name": "docs"
    }
  ]
}

@Git Diff

Provides all changes made on the current branch compared to main, useful for code summaries and reviews.

{
  "contextProviders": [
    {
      "name": "diff"
    }
  ]
}

@Terminal

Adds the current terminal output as context, useful for debugging and troubleshooting.

{
  "contextProviders": [
    {
      "name": "terminal"
    }
  ]
}

@Problems

Includes errors and warnings from your current file, aiding in debugging.

{
  "contextProviders": [
    {
      "name": "problems"
    }
  ]
}

@Folder

References all contents within a specified folder for broader context.

{
  "contextProviders": [
    {
      "name": "folder"
    }
  ]
}

@Directory Structure

Provides the project's directory structure as context, allowing the LLM to understand file organization and recent changes.

{
  "contextProviders": [
    {
      "name": "directory"
    }
  ]
}
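Putting these together, enabling several providers at once is simply a longer contextProviders list in config.json. Here is a minimal sketch that combines a few of the snippets above (enable whichever providers fit your workflow):

{
  "contextProviders": [
    { "name": "files" },
    { "name": "codebase" },
    { "name": "diff" },
    { "name": "terminal" },
    { "name": "problems" }
  ]
}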

Configuring and Adding AI Models

PearAI allows users to integrate various AI models for enhanced coding capabilities. Here’s how to configure them:

Access the Model Configuration

Adding an AI Model

The configuration for any models you add can be found in the PearAI config.json file (CMD/CTRL+SHIFT+P > Open config.json).
For Azure OpenAI, the "engine" field is your deployment name.
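As a rough sketch of what a model entry might look like, a models list in config.json could resemble the following. The field names other than "engine" are assumptions based on the general shape of config.json, and the keys and URLs are placeholders, so check your own file for the exact schema:

{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "YOUR_API_KEY"
    },
    {
      "title": "Azure GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "engine": "YOUR_DEPLOYMENT_NAME",
      "apiBase": "https://YOUR_RESOURCE.openai.azure.com",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}

Here the "engine" value is the Azure deployment name mentioned above.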

Important Shortcuts in PearAI

Tab AutoComplete

PearAI supports tab autocomplete, which predicts and suggests the code you're likely to type next as you work. Here's how to set it up:

Setup Guide

Supermaven is currently one of the best and fastest code autocomplete AIs on the market and provides a generous free tier. Simply install Supermaven directly as an extension within PearAI.
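If you would rather wire autocomplete to a specific model through configuration instead of an extension, the same config.json can typically carry a tab-autocomplete entry. The field name below (tabAutocompleteModel) and its contents are assumptions based on the config.json structure used throughout this guide, so verify them against your own file before relying on them:

{
  "tabAutocompleteModel": {
    "title": "Autocomplete",
    "provider": "openai",
    "model": "YOUR_AUTOCOMPLETE_MODEL",
    "apiKey": "YOUR_API_KEY"
  }
}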

Usage Costs (PearAI Server Only)

PearAI's usage cost is measured in credits. The amount of credits used depends on factors such as the size of input prompts, output responses, model used, and AI tool used (PearAI Chat, PearAI Search, PearAI Creator, etc.).

As an early access benefit, current subscribers will be grandfathered into the early bird pricing, ensuring they maintain these special rates permanently. The $15/month subscription provides greater value than purchasing the equivalent amount of API credits directly from LLM providers, offering access to more usage at a better price point.

It is important to note that longer messages and larger files consume more credits. Similarly, extended conversations will use up credits faster as each previous message is included as context. To optimize credit usage, it is recommended to start new chats frequently. Being more specific in prompts not only saves credits but also leads to more accurate results, as the AI will have less irrelevant data to process.

Subscribers who reach their monthly limit can top up for more credits via the dashboard, with the added benefit that these credits do not expire.

Maximizing PearAI Usage

To get the most out of PearAI, consider the following tips:

Start new conversations: When switching topics or asking unrelated questions, initiating a new conversation helps keep things manageable and optimizes usage.

Avoid re-uploads: After uploading a file, it does not need to be uploaded again within the same conversation, as PearAI remembers the previously uploaded information.

Provide relevant context: While PearAI can access the entire codebase, the best results are achieved by including only the files directly related to the request. This enables PearAI to focus on the most relevant information and provide more accurate and helpful responses.

Available models

PearAI server

Claude 3.5 Sonnet latest

Claude 3.5 Haiku (unlimited, and automatically switched to once a user reaches their monthly limit)

GPT-4o latest

OpenAI o1-mini

OpenAI o1-preview

Gemini 1.5 Pro

Common use cases

Easily understand code sections

Tab to autocomplete code suggestions

Refactor functions where you are coding

Ask questions about your codebase

Quickly use documentation as context

Kick off actions with slash commands

Add classes, files, and more to context

Understand terminal errors immediately

PearAI simplifies project development with AI-driven code generation. Here’s an example:

Generating a Minecraft Clone

Running the Project

Why Choose PearAI Over Other IDEs?

Seamless AI Integration

Free and Open-Source

Flexible and Versatile

Continuous Updates

Final Thoughts

PearAI is a powerful AI-driven coding assistant that simplifies development with its seamless integrations and user-friendly interface. Whether you're a beginner or an advanced developer, PearAI provides an intuitive environment for boosting productivity. If you're looking for a free and feature-rich AI-powered IDE, give PearAI a try today.
