How to Run EXAONE Deep Locally Using Ollama

Discover how to run EXAONE Deep, LG’s inference AI model, locally with Ollama. This technical guide covers installation, setup, and API testing with Apidog for developers and researchers.

Ashley Innocent

20 March 2025

Running advanced AI models locally has become a practical solution for developers and researchers who need speed, privacy, and control. EXAONE Deep, an innovative inference AI model from LG AI Research, excels at solving complex problems in math, science, and coding. By using Ollama, a platform designed to deploy large language models on local hardware, you can set up EXAONE Deep on your own machine with ease.

💡
Boost Your Workflow with Apidog
Working with AI models like EXAONE Deep often involves API integration. Apidog is a free, powerful tool that makes API testing and debugging a breeze. Download Apidog today to streamline your development and ensure smooth communication with your local AI setup.

Let’s dive into the process.

What Are EXAONE Deep and Ollama?

Before we proceed, let’s clarify what EXAONE Deep and Ollama are and why they matter.

EXAONE Deep is a cutting-edge AI model developed by LG AI Research. Unlike typical language models, it’s an inference AI, meaning it focuses on reasoning and problem-solving. It autonomously generates hypotheses, verifies them, and provides answers to complex questions in fields like mathematics, science, and programming. This makes it a valuable asset for anyone tackling technical challenges.

Meanwhile, Ollama is an open-source platform that lets you run large language models, including EXAONE Deep, on your local machine. It packages a model's weights, configuration, and dependencies into a single Docker-like bundle, simplifying the deployment process. By running EXAONE Deep locally with Ollama, you gain several advantages:

- Privacy: your prompts and data never leave your machine.
- Cost: no per-token API fees or cloud dependency.
- Availability: once downloaded, the model works offline.
- Control: you pick the model variant and tune its parameters yourself.

Prerequisites for Running EXAONE Deep Locally

To run EXAONE Deep locally, your system must meet certain hardware and software standards. Since this is a resource-heavy AI model, having the right setup is critical. Here’s what you need:

Hardware Requirements

Exact needs scale with the model variant you choose, but as a rough guide:

- A modern multi-core CPU.
- At least 16 GB of RAM; 32 GB or more is safer for larger variants.
- A dedicated GPU with ample VRAM is strongly recommended. Smaller variants can run on CPU alone, but responses will be slow.
- 10 GB or more of free disk space, since the model files run to several gigabytes.

Software Requirements

- A 64-bit operating system that Ollama supports: macOS, Linux, or Windows.
- Ollama itself (installed in the next section).
- A terminal, plus optionally curl and Apidog for testing the API.

With these in place, you’re ready to install Ollama and get EXAONE Deep running. Let’s transition to the installation process.

Installing Ollama on Your System

Ollama is your gateway to running EXAONE Deep locally, and its installation is straightforward. Follow these steps to set it up:

Download Ollama:

On Linux, the official install script handles the download and setup in one step:

curl -fsSL https://ollama.ai/install.sh | sh

On macOS and Windows, download the installer from the Ollama website and run it instead.

Check the Installation:

Confirm that Ollama installed correctly by printing its version:

ollama --version

Once Ollama is installed, you’re set to download and run EXAONE Deep. Let’s move to that next.

Setting Up and Running EXAONE Deep with Ollama

Now that Ollama is ready, let’s get EXAONE Deep up and running. This involves downloading the model and launching it locally.

Step 1: Download the EXAONE Deep Model

Ollama hosts EXAONE Deep in its model library. To pull it to your machine, run:

ollama pull exaone-deep

This command fetches the model files. Depending on your internet speed and the model’s size (which can be several gigabytes), this might take a few minutes. Watch the terminal for progress updates.
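If you want a specific size, pull a tagged variant instead. As an assumption worth verifying against the current Ollama model library, EXAONE Deep is published in several sizes (for example 2.4b, 7.8b, and 32b), and the bare exaone-deep name fetches the default tag:

# Pull a specific variant; check tag names against the library listing
ollama pull exaone-deep:7.8b

# Confirm the download by listing locally installed models
ollama list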

Step 2: Launch the Model

Once downloaded, start EXAONE Deep with:

ollama run exaone-deep

This command fires up the model, and Ollama spins up a local server. You’ll see a prompt where you can type questions or commands. For example:

> Solve 2x + 3 = 7

The model reasons through it (subtracting 3 from both sides gives 2x = 4) and returns the answer, x = 2.

Step 3: Customize Settings (Optional)

Ollama lets you tweak how EXAONE Deep runs, either interactively or persistently. For instance, inside the interactive prompt you can type /set parameter temperature 0.6 to adjust sampling for the current session, or you can bake settings into a custom model with a Modelfile, as sketched below.
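Here is a minimal Modelfile sketch; the parameter values are illustrative assumptions rather than tuned recommendations:

# Modelfile — illustrative values, adjust to taste
FROM exaone-deep
PARAMETER temperature 0.6
PARAMETER num_ctx 4096

Build and run the customized model:

ollama create exaone-deep-custom -f Modelfile
ollama run exaone-deep-custom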

At this point, EXAONE Deep is operational. However, typing prompts in the terminal isn’t the only way to use it. Next, we’ll explore how to interact with it programmatically using its API—and how Apidog fits in.

Using Apidog to Interact with EXAONE Deep

For developers building applications, accessing EXAONE Deep via its API is more practical than the command line. Fortunately, Ollama provides a RESTful API when you run the model. Here’s where Apidog, an API testing tool, becomes invaluable.

Understanding the Ollama API

When you launch EXAONE Deep with ollama run exaone-deep, Ollama serves a local REST API, typically at http://localhost:11434. This server exposes endpoints like:

- POST /api/generate — one-shot text generation from a prompt.
- POST /api/chat — multi-turn, chat-style requests.
- GET /api/tags — lists the models installed locally.
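Before wiring up a client, you can sanity-check the API from the terminal. A minimal sketch with curl, assuming the default port:

# Ask the model a question through the REST API (non-streaming)
curl http://localhost:11434/api/generate -d '{
  "model": "exaone-deep",
  "prompt": "Solve 2x + 3 = 7",
  "stream": false
}'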

Setting Up Apidog

Follow these steps to use Apidog with EXAONE Deep:

Install Apidog:

Download Apidog from its official website, install it for your operating system, and create a new project.

Create a New Request:

Inside the project, create a new HTTP request and set its method to POST.

Configure the Request:

Point the request at http://localhost:11434/api/generate and set the body type to JSON, using a payload like:

{
  "model": "exaone-deep",
  "prompt": "What is the square root of 16?",
  "stream": false
}

Send and Test:

Click Send. If the model is running, Apidog displays the JSON reply, with the model's answer in the response field.
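As a rough, abridged sketch of what comes back (exact fields can vary by Ollama version):

{
  "model": "exaone-deep",
  "created_at": "2025-03-20T12:00:00Z",
  "response": "The square root of 16 is 4.",
  "done": true
}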

Why Use Apidog?

Apidog simplifies API work by:

- Letting you build, save, and organize requests visually instead of retyping curl commands.
- Pretty-printing JSON responses so long model outputs stay readable.
- Keeping a request history, which helps when you iterate on prompts.
- Supporting documentation and team collaboration as your integration grows.

With Apidog, integrating EXAONE Deep into your projects becomes seamless. But what if you hit a snag? Let’s cover troubleshooting next.

Troubleshooting Common Issues

Running a model like EXAONE Deep locally can sometimes trip you up. Here are common problems and fixes:

Problem: GPU Memory Error

Fix: Pull a smaller model variant, close other GPU-hungry applications, or let Ollama fall back to CPU execution (slower, but it works).

Problem: Model Won't Start

Fix: Confirm the download completed with ollama list, make sure the Ollama service is running (on Linux you can start it manually with ollama serve), and re-pull the model if its files look incomplete.

Problem: API Doesn't Respond

Fix: Verify that something is listening on port 11434, that no firewall blocks localhost traffic, and that the model name in your request matches what ollama list reports.
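A few quick terminal checks isolate most of these issues; this assumes Ollama's default port, and ollama ps requires a reasonably recent Ollama release:

# Is the API server reachable?
curl http://localhost:11434/api/tags

# Which models are installed, and which are loaded right now?
ollama list
ollama ps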

Optimization Tip

For better performance, upgrade your GPU or add RAM. EXAONE Deep thrives on strong hardware.

With these solutions, you’ll keep your setup humming. Let’s wrap up.

Conclusion

Running EXAONE Deep locally using Ollama unlocks a world of AI-powered reasoning without cloud dependency. This guide has shown you how to install Ollama, set up EXAONE Deep, and use Apidog to interact with its API. From solving math problems to coding assistance, this setup empowers you to tackle tough tasks efficiently.

Ready to explore? Fire up Ollama, download EXAONE Deep, and grab Apidog to streamline your workflow. The power of local AI is at your fingertips.

