Open Codex CLI is an open-source tool that brings the power of large language models (LLMs) directly into your terminal workflow. This guide focuses specifically on leveraging one of the most advanced models available today – Google's Gemini 2.5 Pro – within the Open Codex CLI environment.
Open Codex CLI is a fork of the original OpenAI Codex CLI, maintaining its core functionality but significantly expanding its capabilities by adding support for multiple AI providers, including Google Gemini. This allows developers to choose the best model for their task while staying within their familiar terminal interface. Imagine asking complex coding questions, generating boilerplate code, refactoring existing functions, explaining complex scripts, or even orchestrating build commands, all driven by Gemini's intelligence without leaving your command line.
This article provides a comprehensive, step-by-step guide to installing, configuring, and using the Open Codex CLI specifically with Google Gemini 2.5 Pro (referring to the `gemini-2.5-pro-preview-03-25` model version available at the time of writing, as specified in the Open Codex CLI configuration) and its sibling models like `gemini-2.0-flash`. Whether you're a seasoned developer looking to optimize your workflow or new to AI coding assistants, you'll learn how to harness this powerful combination.
Want an integrated, All-in-One platform for your Developer Team to work together with maximum productivity?
Apidog delivers all your demands, and replaces Postman at a much more affordable price!

Why Open Codex CLI with Gemini?
This is wild. In just a few hours my friend forked OpenAI Codex to work with Gemini 2.5 Pro 🤯
— Sawyer Hood (@sawyerhood) April 17, 2025
Combining Open Codex CLI with Gemini 2.5 Pro offers a compelling suite of benefits:
- Terminal-Native Workflow: For developers who prefer the command line, Open Codex CLI provides a seamless integration. No need to switch contexts between your editor, terminal, and a separate browser window for AI assistance. This leads to faster iteration and less disruption.
- Open Source and Flexible: Being open-source (`open-codex` on npm), the tool offers transparency and the potential for community contributions. Its multi-provider support (Gemini, OpenAI, OpenRouter, Ollama) gives you the flexibility to switch models or providers as needed without changing your core tooling.
- Powerful AI Capabilities: Gemini 2.5 Pro brings state-of-the-art reasoning, code generation, and understanding capabilities. This allows for more complex tasks, better code quality suggestions, and deeper analysis directly within the CLI.
- Action-Oriented: Unlike simple chatbots, Open Codex CLI is designed for action. It can read your local files for context, propose file modifications (patches), and execute shell commands within a secure sandbox, enabling true chat-driven development.
- Security Focused: The CLI incorporates sandboxing mechanisms (Apple Seatbelt on macOS, Docker recommended on Linux) to execute potentially risky operations like shell commands safely, especially when using auto-approval modes. It runs commands network-disabled and directory-sandboxed by default in higher autonomy modes.
- Version Control Integration: The tool is designed to work alongside Git. By default, it requires approval before modifying files, allowing you to review changes before they affect your working directory, ensuring you always have a safety net.
Prerequisites
Before you begin, ensure you have the following:
- Operating System: macOS 12 or newer, a modern Linux distribution (like Ubuntu 20.04+, Debian 10+), or Windows 10/11 using the Windows Subsystem for Linux (WSL2). Direct Windows support is not available.
- Node.js: Version 22 or newer is required. LTS (Long-Term Support) versions are generally recommended. You can check your version by running `node -v` in your terminal. Download Node.js from nodejs.org.
- Google Cloud Account & Gemini API Key: You'll need an API key to authenticate requests to the Gemini models. You can obtain one through Google AI Studio or the Google Cloud Console. Keep this key secure, as it's linked to your account.
- Terminal Familiarity: Basic knowledge of using your system's command-line interface (Terminal, iTerm, Bash, Zsh, etc.) is assumed.
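Before moving on, you can verify the first two requirements at once. The following pre-flight check is a sketch for illustration, not part of Open Codex CLI:

```shell
#!/bin/sh
# Pre-flight check: Node.js 22+ on PATH and the Gemini API key exported.
ok=1

if command -v node >/dev/null 2>&1; then
  major=$(node -v | sed 's/^v//' | cut -d. -f1)
  if [ "$major" -lt 22 ]; then
    echo "Node.js 22+ required (found $(node -v))"; ok=0
  fi
else
  echo "Node.js not found on PATH"; ok=0
fi

if [ -z "$GOOGLE_GENERATIVE_AI_API_KEY" ]; then
  echo "GOOGLE_GENERATIVE_AI_API_KEY is not set"; ok=0
fi

if [ "$ok" -eq 1 ]; then echo "All prerequisites satisfied"; else echo "Prerequisite check failed"; fi
```

Save it as a script and run it whenever a fresh terminal misbehaves; it pinpoints which prerequisite is missing.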
Step 1: Installing Open Codex CLI
The recommended way to install Open Codex CLI is globally via npm (Node Package Manager), which comes bundled with Node.js. Open your terminal and run:
npm install -g open-codex
Alternatively, if you use Yarn as your package manager, you can run:
yarn global add open-codex
This command downloads the `open-codex` package and makes the `open-codex` command accessible from anywhere in your terminal. Avoid using `sudo` for global npm installs; if you encounter permission issues, it's better to fix npm permissions.
You can verify the installation by running `open-codex --version` or `open-codex --help`.
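If a global install does fail with a permissions error, one common fix is to point npm's global prefix at a user-writable directory. A sketch (the `~/.npm-global` path is an arbitrary choice, not an npm default):

```shell
# Give npm a user-owned directory for global packages so sudo is never needed.
mkdir -p "$HOME/.npm-global"
if command -v npm >/dev/null 2>&1; then
  npm config set prefix "$HOME/.npm-global"
fi
# Make globally installed binaries (like open-codex) discoverable.
export PATH="$HOME/.npm-global/bin:$PATH"
# Add the export line above to your shell rc file to make it permanent.
```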
Step 2: Configuring Gemini Access
To use Gemini models, Open Codex CLI needs your API key and needs to know you want to use the `gemini` provider. There are two main ways to configure this:
Method 1: Environment Variable (Recommended for Quick Start/Testing)
The simplest way to provide your API key is through an environment variable. The Open Codex CLI specifically looks for `GOOGLE_GENERATIVE_AI_API_KEY` when the `gemini` provider is selected.
In your terminal, run the following command, replacing `YOUR_API_KEY_HERE` with your actual Gemini API key:
export GOOGLE_GENERATIVE_AI_API_KEY="YOUR_API_KEY_HERE"
Important: This command sets the environment variable only for the current terminal session. If you close the terminal or open a new one, you'll need to run the command again.
To make the API key available permanently, you need to add the `export` line to your shell's configuration file. Common files include:
- `~/.zshrc` (for Zsh, the default on recent macOS)
- `~/.bashrc` or `~/.bash_profile` (for Bash)
Add the line `export GOOGLE_GENERATIVE_AI_API_KEY="YOUR_API_KEY_HERE"` to the appropriate file, save it, and then either restart your terminal or run `source ~/.zshrc` (or the relevant file path) to apply the changes.
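The append can be scripted as well. Here is a sketch that writes the line idempotently; `RC_FILE` is a placeholder for your shell's rc file, and you would substitute your real key:

```shell
# Append the export line to the rc file only if it is not already there.
RC_FILE="${RC_FILE:-$HOME/.zshrc}"
LINE='export GOOGLE_GENERATIVE_AI_API_KEY="YOUR_API_KEY_HERE"'
grep -qF "$LINE" "$RC_FILE" 2>/dev/null || echo "$LINE" >> "$RC_FILE"
```

Because of the `grep` guard, re-running the snippet never duplicates the line.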
You can then specify Gemini as the provider on each invocation using the `--provider` flag:
open-codex --provider gemini "What is the capital of France?"
Method 2: Configuration File (Recommended for Persistent Setup)
For a more permanent setup, especially if you plan to use Gemini consistently, you can use the Open Codex CLI configuration file. The CLI looks for a configuration file at `~/.codex/config.json`.
Create the directory if it doesn't exist: `mkdir -p ~/.codex`
Then create and open the configuration file: `nano ~/.codex/config.json` (or use your preferred text editor).
Add the following JSON content to specify Gemini as the default provider:
{
"provider": "gemini"
}
You can also optionally set a default Gemini model. The Open Codex CLI defines `gemini-2.5-pro-preview-03-25` as the default "agentic" model and `gemini-2.0-flash` as the default "full context" model for Gemini. To explicitly set the powerful 2.5 Pro model as the default for most interactions, use:
{
"provider": "gemini",
"model": "gemini-2.5-pro-preview-03-25"
}
Save the file and exit the editor.
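If you prefer a single command over an interactive editor, the same file can be written with a heredoc. This is a sketch using the same path and keys described above:

```shell
# Write ~/.codex/config.json in one step.
mkdir -p "$HOME/.codex"
cat > "$HOME/.codex/config.json" <<'EOF'
{
  "provider": "gemini",
  "model": "gemini-2.5-pro-preview-03-25"
}
EOF
```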
Crucially, even when using the configuration file to set the provider and model, you still need to set the `GOOGLE_GENERATIVE_AI_API_KEY` environment variable as described in Method 1. The configuration file tells the CLI which provider to use, while the environment variable provides the credentials for that provider.
With the configuration file set, you can now simply run `open-codex` and it will default to using the Gemini provider specified:
open-codex "What is the capital of France?"
Step 3: Basic Usage with Gemini
Now that Open Codex CLI is installed and configured for Gemini, let's explore how to interact with it.
Interactive Mode (REPL)
For a chat-like experience, run the CLI without a specific prompt:
open-codex
# Or if you haven't set the default provider in config.json:
# open-codex --provider gemini
This starts a Read-Eval-Print Loop (REPL) where you can type prompts, get responses, and have a continuous conversation with Gemini.
❯ open-codex --provider gemini
Welcome to Open Codex CLI! Type your request, then hit Enter twice to send.
> Tell me about the Gemini 2.5 Pro model.
Hit Enter twice to send the prompt. Gemini will respond within the terminal. You can continue the conversation by typing follow-up questions.
Direct Prompting
For one-off requests, you can pass the prompt directly as an argument. If you haven't set defaults in `config.json`, you'll need the provider flag. You can also specify the exact model using the `--model` or `-m` flag:
- Using default Gemini model (if set in config):
open-codex "Explain the concept of closures in Python."
- Explicitly specifying provider and model:
open-codex --provider gemini --model gemini-2.5-pro-preview-03-25 "Write a bash script to find all files larger than 10MB in the current directory and its subdirectories."
- Using a different Gemini model (e.g., Flash for potentially faster/cheaper tasks):
open-codex --provider gemini --model gemini-2.0-flash "Summarize the main points of the React documentation on hooks."
The CLI will process the request with Gemini and print the response directly to the standard output.
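If you find yourself typing the provider and model flags repeatedly, a small shell wrapper can shorten the invocation. This is a convenience sketch: the function name `ask` and the `ASK_MODEL` variable are arbitrary, not part of the CLI.

```shell
# ask "prompt" — forwards to open-codex with the Gemini provider preselected.
# Override the model per-call by setting ASK_MODEL in the environment.
ask() {
  open-codex --provider gemini --model "${ASK_MODEL:-gemini-2.5-pro-preview-03-25}" "$@"
}

# Example (run once open-codex is installed and the API key is set):
# ask "Explain the concept of closures in Python."
```

Put the function in your shell rc file to have it available in every session.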
Step 4: Leveraging Open Codex Features with Gemini
The true power of Open Codex CLI lies in its ability to interact with your local development environment, guided by Gemini.
File System Interaction
Gemini, via Open Codex CLI, can read files in your current project to gain context for your requests. It can also propose changes (writes or patches) to your files.
- Reading Files: When you mention filenames in your prompt, the CLI often automatically reads them to provide context to Gemini.
# Assuming you have a file named 'calculate.js'
open-codex --provider gemini "Read 'calculate.js' and suggest improvements for readability."
- Writing/Patching Files: If you ask Gemini to modify a file, the CLI will generate a diff (a summary of proposed changes) and ask for your approval before applying anything.
open-codex --provider gemini "Add a new function called 'subtract' to 'calculate.js' that takes two arguments and returns their difference."
The CLI will output the proposed changes and prompt: `Apply patch? [y/N]`. Press `y` to accept or `n` to reject.
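Because the CLI is designed to work alongside Git, making a clean commit before an AI-driven edit gives you a precise view of what changed afterwards. A sketch of that workflow in a throwaway repository (the file contents are illustrative):

```shell
# Create a demo repo with a baseline commit before letting the CLI patch files.
git init -q demo && cd demo
echo 'function add(a, b) { return a + b; }' > calculate.js
git add calculate.js
git -c user.name=demo -c user.email=demo@example.com commit -qm "baseline before AI edits"

# ...run open-codex and approve its patch here, then review exactly what changed:
git diff
```

If a patch turns out badly, `git checkout -- calculate.js` restores the baseline.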
Command Execution
Open Codex CLI can also execute shell commands suggested by Gemini. This is powerful but requires caution.
- Approval Modes: The `--approval-mode` (or `-a`) flag controls the level of autonomy:
  - `suggest` (default): Requires explicit approval for all file changes and all shell commands. Recommended for starting.
  - `auto-edit`: Automatically applies file changes but still asks for approval for shell commands.
  - `full-auto`: Automatically applies file changes AND executes shell commands (within the sandbox). Use with extreme caution, especially in untrusted repositories.
- Sandboxing: In `auto-edit` and `full-auto` modes, commands are run network-disabled and confined to the current working directory and temporary files for security.
- Example (with the default `suggest` mode):
open-codex --provider gemini "Install the 'requests' library using pip."
Gemini might propose the command `pip install requests`. The CLI will show the command and ask: `Run command? [y/N]`.
Project Context (`codex.md`)
You can provide persistent instructions or context about your project to Gemini by creating `codex.md` files. The CLI reads these files in order:
1. `~/.codex/instructions.md` (global, personal instructions)
2. `codex.md` at your repository root (project-wide notes)
3. `codex.md` in the current working directory (sub-directory-specific notes)
This allows you to guide Gemini's behavior or provide information about project standards, libraries used, etc., without repeating it in every prompt.
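As a concrete example, a project-level `codex.md` might look like the following. The notes themselves are hypothetical; write whatever matches your project's conventions:

```shell
# Create a sample codex.md at the repository root.
cat > codex.md <<'EOF'
# Project notes for Open Codex CLI

- This project uses Node.js 22 and plain JavaScript (no TypeScript).
- Prefer small, focused functions; follow the existing ESLint configuration.
- Never modify files under vendor/ — they are third-party code.
EOF
```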
Troubleshooting & Tips
- API Key Errors: Ensure `GOOGLE_GENERATIVE_AI_API_KEY` is correctly set, exported, and accessible in your current terminal session. Double-check for typos.
- Model Not Found: Verify the model name (`gemini-2.5-pro-preview-03-25`, `gemini-2.0-flash`, etc.) matches those supported or configured. Check `~/.codex/config.json` or use the `--model` flag correctly.
- Provider Errors: Ensure the provider is set to `gemini`, either via `config.json` or the `--provider gemini` flag.
- Verbose Logging: For debugging, run commands with the `DEBUG=true` environment variable prepended: `DEBUG=true open-codex --provider gemini "My prompt"`. This will print detailed request/response information.
- Start Simple: Begin with straightforward prompts and gradually increase complexity as you get comfortable with how Gemini and the CLI interact.
- Review Approvals Carefully: Especially when dealing with file modifications or command execution, always review the proposed changes or commands before approving them (answering `y`).
Conclusion
The Open Codex CLI, supercharged by Google's Gemini 2.5 Pro, transforms your terminal into an intelligent coding assistant. By following the steps outlined in this guide, you can seamlessly integrate advanced AI capabilities into your daily development workflow, directly from the command line.
From quick code snippets and explanations to complex refactoring and script execution, this combination offers significant potential for boosting productivity and streamlining tasks. The tool's focus on security through sandboxing and user approvals, combined with the flexibility of multi-provider support and its open-source nature, makes Open Codex CLI a compelling choice for developers looking to leverage AI within their terminal.
Experiment with different prompts, explore the various approval modes (cautiously!), and discover how Gemini and Open Codex CLI can enhance your coding experience.