How Can You Install Firecrawl CLI and Use Firecrawl CLI

This in-depth technical guide covers every Firecrawl CLI command, flag, authentication method, agent integration, and optimization tip for clean LLM-ready data extraction.

Herve Kom

17 March 2026

Firecrawl CLI is a unified terminal tool that lets AI agents and developers scrape, search, map, crawl, and automate browsers on any website, writing clean markdown, JSON, screenshots, and more directly to your filesystem. Run Firecrawl CLI via npx firecrawl-cli (no install needed) or install it globally, then connect it to Claude Code, Cursor, or OpenCode with a single firecrawl init command that adds the skill automatically.

You install Firecrawl CLI because AI agents and developers need reliable, real-time web data without brittle custom scripts or blocked requests. Firecrawl CLI unifies scraping, web search, site mapping, recursive crawling, and cloud browser sessions into one terminal-native tool. It outputs clean markdown, structured JSON, screenshots, or HTML directly to your filesystem, keeping token counts low and context precise for LLMs. Agents like Claude Code, Cursor, and OpenCode leverage Firecrawl CLI daily to fetch fresh content from JavaScript-rendered pages, dynamic sites, or protected flows that traditional tools cannot handle.

💡
Before you fire up your first Firecrawl CLI command, grab Apidog for free. It lets you visually test and debug the Firecrawl API endpoints that Firecrawl CLI uses under the hood: API keys, custom params, and response shapes, all in one clean interface. It saves you a ton of trial and error when setting up or troubleshooting agent integrations.

You prepare your system, install Firecrawl CLI, authenticate, explore core commands, integrate with agents, and apply best practices. Firecrawl CLI manages concurrency, rate limits, and local caching automatically so you concentrate on extracting valuable data. Precise flag choices in Firecrawl CLI such as format selectors or wait timers create substantial improvements in output quality and efficiency.

What Firecrawl CLI Delivers and Why It Outperforms Traditional Web Tools

Firecrawl CLI renders JavaScript natively through cloud browsers, respects anti-bot protections, and delivers >80% content recall on complex sites where cheerio-based or basic Puppeteer scripts fail. You receive LLM-optimized markdown by default, stripped of boilerplate, which reduces context window pressure when feeding results to agents.

Firecrawl CLI writes files locally instead of streaming large payloads, enabling bash-powered search over scraped content without repeated API calls. You combine Firecrawl CLI scrape, search, map, crawl, and browser commands in scripts or agent loops seamlessly. These capabilities eliminate the need for separate libraries, headless instances, or proxy rotations. Small decisions like using --only-main-content in Firecrawl CLI yield cleaner, cheaper outputs that compound into major productivity gains.
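Because results land on disk, a plain grep is often all you need to query them again without spending credits. A minimal sketch — the two markdown files below are stand-ins for real scraped output:

```shell
# Simulate a directory of pages scraped earlier with `firecrawl scrape ... -o`:
mkdir -p scraped
printf 'Install with npm install -g firecrawl-cli\n' > scraped/install.md
printf 'Authentication uses browser-based OAuth\n' > scraped/auth.md

# Search locally instead of re-calling the API (-r recurse, -i ignore case, -l list files):
grep -ril 'npm install' scraped
```

The same pattern scales to entire crawl outputs: one crawl, then unlimited local queries.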

Preparing Your Environment Before Installing Firecrawl CLI

You verify Node.js ≥18 because Firecrawl CLI depends on modern npm features. Run node --version in your terminal. Update via your package manager or nvm if needed.

You create a workspace directory to organize Firecrawl CLI outputs:

mkdir firecrawl-cli-projects && cd firecrawl-cli-projects

This prevents clutter and makes it easy to git-track datasets. You optionally disable telemetry:

export FIRECRAWL_NO_TELEMETRY=1

Installing Firecrawl CLI Using the Recommended Init Method for Agents

The fastest path installs Firecrawl CLI, authenticates, and adds agent skills in one step. Execute:

npx -y firecrawl-cli@latest init --all --browser

Firecrawl CLI opens your browser for Firecrawl account login (or signup), generates/stores your API key securely, and configures skills for Claude Code, Cursor, and other compatible agents. Restart your agent afterward so it detects the new Firecrawl CLI capabilities. This method equips Firecrawl CLI globally and enables MCP/serverless browser access.

Installing Firecrawl CLI Globally via npm for Frequent Use

For permanent, low-latency access across projects, install Firecrawl CLI globally:

npm install -g firecrawl-cli

Verify with:

firecrawl --version

Firecrawl CLI now responds instantly from any directory without npx overhead.

Authenticating Firecrawl CLI and Checking Your Configuration

Authentication unlocks full Firecrawl CLI features. Run:

firecrawl login

Firecrawl CLI opens a browser-based OAuth flow. Alternatively, set your key manually:

export FIRECRAWL_API_KEY=fc-your-key-here

Check status anytime:

firecrawl --status

This displays credits, concurrency limits, and auth state. View full config:

firecrawl view-config

Switch accounts with firecrawl logout then re-login. For local/self-hosted Firecrawl instances, use --api-url http://localhost:3002 to bypass cloud auth and credits.

Mastering the Scrape Command in Firecrawl CLI

You extract content from any URL with:

firecrawl scrape https://example.com --only-main-content

Firecrawl CLI returns clean markdown and saves to ./output.md when you add -o output.md. Always prefer --only-main-content to remove nav, ads, and sidebars, slashing token usage.
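To see what that saves, you can estimate token counts straight from the saved file. A rough sketch using the common ~4-characters-per-token heuristic (the sample file here stands in for real scraped output):

```shell
# Stand-in for a page saved by `firecrawl scrape <url> -o output.md`:
printf '%s\n' '# Example Domain' '' 'This domain is for use in illustrative examples.' > output.md

# Rough token estimate: byte count divided by 4 (a common heuristic, not exact)
chars=$(wc -c < output.md)
echo "approx tokens: $((chars / 4))"
```

Run it before and after adding --only-main-content to quantify the reduction on your own target pages.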

Request multiple formats:

firecrawl scrape https://example.com --format markdown,json,html,links,images --pretty

Firecrawl CLI outputs structured JSON containing all requested data. Capture screenshots: --screenshot or --full-page-screenshot. Handle slow loaders with --wait-for 5000.

Filter precisely:

firecrawl scrape https://docs.example.com --include-tags main,article --exclude-tags nav,footer,script

Add --timing to benchmark performance. Firecrawl CLI stores results locally, ready for piping or agent ingestion.
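Piping works both ways: you can also drive scrapes from a URL list. The loop below is a dry run — it prints the commands it would execute; remove the echo to run them for real. The URL-to-filename sanitizer is my own convention, not part of the CLI:

```shell
cat > urls.txt <<'EOF'
https://example.com/docs/intro
https://example.com/docs/api
EOF

while read -r url; do
  # Turn the URL into a safe filename (replace every non-alphanumeric char with '-')
  name=$(printf '%s' "$url" | tr -c 'a-zA-Z0-9' '-')
  echo firecrawl scrape "$url" --only-main-content -o "$name.md"
done < urls.txt
```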

Performing Web Search with Firecrawl CLI

You search the internet and scrape top results together:

firecrawl search "latest AI agent benchmarks" --scrape --limit 8 --scrape-formats markdown

Firecrawl CLI fetches results, extracts content, and saves files. Filter by recency (--tbs qdr:w), location, or source type. Combine search with browser sessions for deeper verification. Firecrawl CLI therefore supports full research loops in one tool.

Mapping Websites Using Firecrawl CLI

Discover all URLs before deep extraction:

firecrawl map https://example.com -o sitemap.json

Firecrawl CLI returns a structured list with metadata. Feed filtered URLs into scrape or crawl commands. Firecrawl CLI honors robots.txt and polite crawling automatically.
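Once the map is on disk you can filter it before scraping. The JSON below is only a guess at the output shape (check your actual sitemap.json); the grep approach assumes nothing beyond URLs appearing as strings in the file:

```shell
# Hypothetical sitemap.json written by `firecrawl map ... -o sitemap.json`
# (illustrative shape; the real schema may differ):
cat > sitemap.json <<'EOF'
{"links":[{"url":"https://example.com/"},{"url":"https://example.com/docs/a"},{"url":"https://example.com/blog/b"}]}
EOF

# Extract every URL, then keep only the docs section for targeted scraping:
grep -o 'https://[^"]*' sitemap.json | grep '/docs/' > docs-urls.txt
cat docs-urls.txt
```

The resulting docs-urls.txt feeds directly into a scrape loop, so you only pay for the pages you actually need.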

Crawling Entire Sites Recursively with Firecrawl CLI

Crawl comprehensively:

firecrawl crawl https://example.com --wait --progress -o crawl-output.json

Firecrawl CLI follows internal links, scrapes pages, and stores everything locally. Control depth, max pages, and concurrency to manage costs. Real-time progress reporting lets you monitor or cancel large jobs.

Automating Browser Sessions in Firecrawl CLI

Handle interactive flows with cloud browsers:

firecrawl browser launch-session

Firecrawl CLI returns a session ID. Execute actions:

firecrawl browser execute "open https://news.ycombinator.com" --session <id>
firecrawl browser execute "click .titleline > a" --session <id>
firecrawl browser execute "scrape" --session <id>

Firecrawl CLI supports clicks, typing, navigation, and extraction after dynamic interactions. Close sessions to free resources. Firecrawl CLI replaces complex Puppeteer code with simple, agent-readable commands.

Advanced Firecrawl CLI Configuration and Global Flags

Customize persistently:

firecrawl config --api-url https://your-custom-endpoint --concurrency 5

Firecrawl CLI applies these on every run. Force JSON output globally or adjust headers. Monitor credits before big operations with --status. Export FIRECRAWL_API_KEY in your shell profile for seamless sessions.
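For the shell-profile setup mentioned above, a minimal snippet for ~/.bashrc or ~/.zshrc (the key value is a placeholder — substitute your own):

```shell
# Firecrawl CLI environment, loaded for every shell session
export FIRECRAWL_API_KEY=fc-your-key-here   # placeholder: use your real key
export FIRECRAWL_NO_TELEMETRY=1             # optional: disable telemetry
```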

Integrating Firecrawl CLI with AI Coding Agents

Install the Firecrawl CLI skill once (npx -y firecrawl-cli@latest init --all), and agents discover it automatically. In CLI + Skills mode, agents run Firecrawl CLI commands explicitly when needed. In MCP mode, agents call native tools invisibly.

Firecrawl CLI returns local file paths instead of raw content, preserving lean context windows. Agents therefore perform reliable web research without extra prompting.

Troubleshooting Firecrawl CLI Issues Efficiently

Authentication fails? Re-run firecrawl login. Rate limits hit? Lower concurrency or check dashboard for plan upgrades. Empty results on JS-heavy sites? Increase --wait-for or enable --only-main-content. Use --timing for diagnostics. Clear credentials with firecrawl logout when switching keys.

Best Practices to Get the Most from Firecrawl CLI

Always include --only-main-content for noise-free markdown. Use descriptive output filenames and dedicated folders. Test small scopes before full crawls. Combine search → map → crawl pipelines. Version-control output dirs for reproducible datasets. Review weekly credit usage to stay efficient. These habits keep Firecrawl CLI fast, cost-effective, and dependable.

Complementing Firecrawl CLI Workflows with Apidog

Download Apidog for free and import Firecrawl endpoints (scrape, search, crawl, etc.) into collections. Apidog visualizes requests, stores your Firecrawl CLI API key as a variable, mocks responses, and runs automated tests. You debug complex Firecrawl CLI options or custom payloads before terminal execution. Firecrawl CLI + Apidog delivers end-to-end confidence: current web data plus verified API behavior.

Conclusion

You now command every aspect of Firecrawl CLI, from installation and authentication to advanced scraping, searching, mapping, crawling, and browser automation. Firecrawl CLI turns chaotic web access into a clean, terminal-first pipeline that powers agents and developers alike.

Run the init command today, test a scrape, and build from there. Firecrawl CLI rewards careful flag usage and experimentation with dramatically better results.

Download Apidog for free now to supercharge your Firecrawl CLI testing and API validation. Install Firecrawl CLI, use Firecrawl CLI, and unlock real-time web mastery.
