Have you ever wished you could tap into the power of AI for truly in-depth research, but felt limited by closed-source solutions and data privacy concerns? Well, get ready to roll up your sleeves because in this article, we're building our very own Open-Source Deep Research Tool powered by the Gemini API!
That's right, we're creating a research powerhouse that's completely under your control. You can think of it as having a dedicated AI research assistant at your fingertips, ready to dive deep into any topic you throw its way. And the best part? We'll be leveraging the Gemini API to get accurate and insightful analysis. Let's get started!

What is Open-Source Deep Research?
Open-Source Deep Research means taking control of your research process: you get the power of AI while keeping transparency, privacy, and the freedom to customize the tool to your specific needs. By building our own research tool, we avoid the limitations and potential biases of closed-source solutions and keep our data secure.
Why Use the Gemini API for Deep Research?
The Gemini API brings Google's state-of-the-art models to the project, and it goes beyond text: it accepts multimodal inputs, so your research assistant can reason over images, video, and audio alongside documents. That makes it especially valuable for research projects that draw on diverse source material. Its flexible design and strong developer tooling also make it easy to tailor the tool to your own research context, supporting a deeper and more nuanced understanding of complex subjects.
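To make that concrete, here is a minimal sketch, assuming you use the official @google/generative-ai Node SDK, of a multimodal request that combines a text prompt with an image. The model name (gemini-1.5-flash) and the chart.png path are illustrative placeholders, not part of the Deep Research codebase.

// Minimal multimodal request with the official @google/generative-ai SDK.
// The model name and the chart.png path are placeholders; adjust to your setup.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { readFileSync } from "node:fs";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_GENERATIVE_AI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

async function summarizeChart(): Promise<void> {
  // Inline image data is passed alongside the text prompt.
  const image = {
    inlineData: {
      data: readFileSync("chart.png").toString("base64"),
      mimeType: "image/png",
    },
  };
  const result = await model.generateContent([
    "Summarize the key findings in this chart for a research brief.",
    image,
  ]);
  console.log(result.response.text());
}

summarizeChart().catch(console.error);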

Key Features of Our Open-Source Deep Research Tool
Before we dive into the build process, let's take a look at some of the features we'll be bringing to life:
- Rapid Deep Research for fast insights.
- Multi-platform Support for seamless access.
- Powered by Google Gemini for advanced AI capabilities.
- Thinking & Networking Models for intelligent analysis.
- Canvas Support for visual organization.
- Research History to track progress.
- Local & Server API Support for flexibility.
- Privacy-Focused design for secure research.
- Multi-Key Payload Support for using several Gemini API keys.
- Multi-language Support: English and Simplified Chinese (简体中文).
- Built with Modern Technologies for efficiency and performance.
Getting Started with Open-Source Deep Research Tool
Ready to build your own AI-powered research assistant? Here's what you'll need to get started:
1. Get Gemini API Key: First and foremost, you'll need a Gemini API key to access the power of Google's AI models. Head over to Google AI Studio and sign up for an API key. Keep this key safe and secure – it's your passport to the world of Gemini! (A quick way to verify the key works is sketched after this list.)

2. One-click Deployment (Optional): For the quickest possible start, you can use the one-click deployment options:
- Deploy with Vercel (Instructions for Vercel are typically straightforward and require linking your GitHub repository and Vercel account).

- Deploy with Cloudflare (the project supports Cloudflare, but you'll need to follow the How to deploy to Cloudflare Pages guide).

These options will get your research tool up and running in minutes, but for the full customization experience, we'll be focusing on local development.
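Before moving on, you may want to confirm that the key from step 1 actually works. The sketch below simply calls the Generative Language API's public list-models endpoint with fetch; a 200 response means the key is accepted. The helper name checkGeminiKey is just for illustration.

// Hypothetical helper: verifies a Gemini API key by listing available models.
async function checkGeminiKey(apiKey: string): Promise<void> {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`
  );
  if (!res.ok) {
    throw new Error(`Key rejected: HTTP ${res.status}`);
  }
  const data = await res.json();
  console.log(`Key OK: ${data.models?.length ?? 0} models available.`);
}

checkGeminiKey(process.env.GOOGLE_GENERATIVE_AI_API_KEY ?? "").catch(console.error);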
Develop the Open-Source Deep Research Tool
Let's dive into the heart of the build process! Follow these steps to get Deep Research up and running on your local browser.
Prerequisites
Before we get started, make sure you have the following installed on your system:
- Node.js (version 18.18.0 or later recommended). You can download it from the official Node.js website.

- pnpm, npm, or yarn: a package manager for Node.js. We'll use pnpm in this tutorial, but any of the three works.
Installation
1. Clone the repository:
git clone https://github.com/u14app/deep-research.git
cd deep-research
This will download the code from GitHub and move you into the project directory.
2. Install dependencies:
pnpm install # or npm install or yarn install
This command will install all the necessary packages for the project.
3. Set up Environment Variables:
This is a crucial step! You'll need to create a .env file in the root directory of your project and configure the following environment variables:
# (Optional) Server-side Gemini API Key (Required for server API calls)
GOOGLE_GENERATIVE_AI_API_KEY=YOUR_GEMINI_API_KEY
# (Optional) Server API proxy URL. Defaults to `https://generativelanguage.googleapis.com`
API_PROXY_BASE_URL=
# (Optional) Server API Access Password for enhanced security
ACCESS_PASSWORD=
# (Optional) Injected script code can be used for statistics or error tracking.
HEAD_SCRIPTS=
Replace YOUR_GEMINI_API_KEY with the actual API key you obtained from Google AI Studio.
Important Notes on Environment Variables:
- GOOGLE_GENERATIVE_AI_API_KEY: Optional but required for using the server-side API. You need to obtain a Google Generative AI API key from Google AI Studio. This key should be kept secret and never committed to your public repository.
- API_PROXY_BASE_URL: Optional. If you need to use a proxy server for API requests, configure this variable with your proxy server's base URL. This is relevant for server-side API calls.
- ACCESS_PASSWORD: Optional but highly recommended for server-side deployments. Set a strong password to protect your server-side API endpoints. This password will be required to access server-side API functionalities.
- HEAD_SCRIPTS: Optional. Injected script code that can be used for analytics or error tracking.
Privacy Reminder: These environment variables are primarily used for server-side API calls. When using the local API mode, no API keys or server-side configurations are needed, further enhancing your privacy.
Multi-key Support: You can supply multiple API keys separated by commas, e.g. key1,key2,key3, as sketched below. Multi-key support is not currently available on Cloudflare because the official build script does not yet support Next.js 15.
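Here is a rough sketch of how such a comma-separated key list could be parsed and rotated. The environment variable name matches the .env file above; the random rotation strategy is an assumption for illustration, not necessarily what Deep Research does internally.

// Illustrative sketch: consuming a comma-separated key list.
// The variable name matches the .env above; the random rotation is an assumption.
const rawKeys = process.env.GOOGLE_GENERATIVE_AI_API_KEY ?? "";
const apiKeys = rawKeys
  .split(",")
  .map((key) => key.trim())
  .filter(Boolean);

if (apiKeys.length === 0) {
  throw new Error("No Gemini API key configured.");
}

// Pick one key per request so load spreads across all configured keys.
const apiKey = apiKeys[Math.floor(Math.random() * apiKeys.length)];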
4. Run the development server:
pnpm dev # or npm run dev or yarn dev
This will start the development server, and you can access Deep Research in your browser at http://localhost:3000.
- Start asking questions about any topic you want to research.

- And view the results!

Deploy the Open-Source Deep Research Tool
Once you're happy with your local setup, you can deploy your research tool to the cloud! Here are a few popular options:
1. Vercel: Deploy with Vercel (This is usually the easiest option).
2. Cloudflare: The project supports Cloudflare, but you'll need to follow the How to deploy to Cloudflare Pages guide.
3. Docker:
- Docker version needs to be 20 or above.
- Pull the pre-built image:
docker pull xiangfa/deep-research:latest
- Run the container:
docker run -d --name deep-research -p 3333:3000 xiangfa/deep-research
You can also specify environment variables:
docker run -d --name deep-research \
  -p 3333:3000 \
  -e GOOGLE_GENERATIVE_AI_API_KEY=AIzaSy... \
  -e ACCESS_PASSWORD=your-password \
  xiangfa/deep-research
- Or build your own docker image:
docker build -t deep-research .
docker run -d --name deep-research -p 3333:3000 deep-research
- Deploy using docker-compose.yml:
version: '3.9'
services:
  deep-research:
    image: xiangfa/deep-research
    container_name: deep-research
    environment:
      - GOOGLE_GENERATIVE_AI_API_KEY=AIzaSy...
      - ACCESS_PASSWORD=your-password
    ports:
      - 3333:3000
Then build it with Docker Compose:
docker compose -f docker-compose.yml build
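Once that's done, docker compose -f docker-compose.yml up -d starts the container in the background, and the app is reachable on port 3333, matching the port mapping above.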
4. Static Deployment:
- You can also build a static version of the site and upload everything in the out directory to any service that supports static pages, such as GitHub Pages, Cloudflare, or Vercel.
pnpm build:export
Open-Source Deep Research Tool Configuration
As covered in the setup steps above, Deep Research uses the following environment variables for server-side API configuration:
- GOOGLE_GENERATIVE_AI_API_KEY
- API_PROXY_BASE_URL
- ACCESS_PASSWORD
These variables are only required if you intend to use the server-side API calling functionality. For local API calls, no configuration is necessary beyond setting up the project.
Remember to always keep your API keys and passwords secure!
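To give a feel for what the ACCESS_PASSWORD protection could look like in practice, here is a minimal sketch of a Next.js middleware gate. It assumes a hypothetical x-access-password header and is an illustration only, not the project's actual implementation.

// middleware.ts (illustration only, not the project's actual implementation)
import { NextRequest, NextResponse } from "next/server";

export function middleware(req: NextRequest) {
  const expected = process.env.ACCESS_PASSWORD;
  // If no password is configured, let every request through.
  if (!expected) return NextResponse.next();

  // Hypothetical header name; the real app may transport the password differently.
  const provided = req.headers.get("x-access-password");
  if (provided !== expected) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  return NextResponse.next();
}

// Guard only the server-side API routes (the path pattern is an assumption).
export const config = { matcher: "/api/:path*" };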
Conclusion: Empowering Your Research with AI
You've now successfully built your own Open Source Deep Research tool powered by the Gemini API! This is a huge step towards unlocking the full potential of AI in your research process.
By building your own tool, you gain complete control over your data, customize your workflows, and contribute to the open-source community. Experiment with different research models, explore the Gemini API's capabilities, and create custom tools to truly personalize your research experience.
The future of research is intelligent and open. Embrace Open-Source Deep Research and empower yourself with the knowledge you need!
