How to Run DeepSeek R1 Locally Using Ollama?

Learn how to run DeepSeek R1 locally using Ollama in this comprehensive guide. Discover step-by-step instructions, prerequisites, and how to test the API with Apidog.

Ashley Innocent

10 February 2025

Excited about the DeepSeek R1 model? It's an impressive reasoning model, and now you can run it right on your own machine. Time to buckle up!

💡
Ready to supercharge your API testing? Download Apidog for free and take your Deepseek R1 experiments to the next level! With Apidog, you can easily create, test, and document your APIs, making it the perfect companion for your local AI adventures. Don't miss out on this powerful tool that'll streamline your workflow and boost your productivity!

What is DeepSeek R1?

DeepSeek R1 is a cutting-edge reasoning model that has garnered attention for its performance across various tasks, including mathematics, coding, and logical reasoning. Running this model locally offers several advantages, such as reduced latency, enhanced privacy, and greater control over your AI applications. Ollama, a versatile tool, facilitates the deployment and execution of such models on local machines.


Prerequisites: Setting the Stage

Alright, let's make sure we've got everything we need before we start this adventure:

  1. A computer with a decent CPU and GPU (the beefier, the better!)
  2. Ollama installed on your system
  3. Some basic command-line knowledge
  4. A thirst for AI knowledge (which I'm sure you've got in spades!)

Setting Up Ollama

Ollama streamlines the process of running AI models locally. To set it up:

  1. Download Ollama: Grab the installer for your operating system from the official Ollama website; on Linux you can instead run curl -fsSL https://ollama.com/install.sh | sh.
  2. Install Ollama: Run the downloaded installer and follow the prompts (the Linux script installs everything in one step).
  3. Verify Installation: Open a terminal and run ollama --version. If it prints a version number, you're ready to go.

Step-by-Step Guide to Running DeepSeek R1 Locally

Step 1: Downloading the DeepSeek R1 Model

First things first, we need to get our hands on the DeepSeek R1 model. Luckily, Ollama makes this super easy. Open up your terminal and type:

ollama run deepseek-r1

This command tells Ollama to download the default DeepSeek R1 variant, an 8-billion-parameter model. (Ollama also hosts smaller distilled tags such as deepseek-r1:1.5b and deepseek-r1:7b if your hardware calls for something lighter.) Sit back and relax while it does its thing; depending on your internet speed, this might take a while. Maybe grab a coffee or do some stretches?

Step 2: Verifying the Installation

Once the download is complete, let's make sure everything is in order. Run this command:

ollama list

You should see deepseek-r1:8b in the list of available models. If you do, give yourself a pat on the back – you're halfway there!

Step 3: Running DeepSeek R1

Now for the moment of truth – let's fire up DeepSeek R1! Use this command:

ollama run deepseek-r1

And just like that, you're conversing with one of the most advanced AI models out there, right from your own computer. How cool is that?

Step 4: Interacting with DeepSeek R1

Once the model is running, you'll see a prompt where you can start typing. Go ahead, ask it something! Here are a few ideas to get you started:

  1. "Explain quantum computing in simple terms."
  2. "Write a Python function that checks whether a number is prime."
  3. "Walk me through the reasoning behind the Monty Hall problem."

Feel free to get creative – DeepSeek R1 is quite versatile!
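If you'd rather script a quick question than chat interactively, `ollama run` also accepts a prompt as a final argument and exits after printing the reply. Here's a minimal sketch of that pattern; `one_shot_cmd` is just an illustrative helper name, and it assumes the ollama CLI from the setup steps is on your PATH:

```python
import subprocess

def one_shot_cmd(prompt, model="deepseek-r1"):
    """Build the argv for a non-interactive call: `ollama run MODEL PROMPT`
    prints the model's reply and exits instead of opening the chat prompt."""
    return ["ollama", "run", model, prompt]

# Needs Ollama installed and the model pulled:
# reply = subprocess.run(one_shot_cmd("Explain quantum computing in simple terms"),
#                        capture_output=True, text=True).stdout
# print(reply)
```

This is handy for shell scripts or cron jobs where an interactive session isn't an option.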

Advanced Usage: Customizing Your Experience

Now that you've got the basics down, let's explore some advanced features to really make the most of your local DeepSeek R1 setup.

Using DeepSeek R1 in Your Projects

Want to integrate DeepSeek R1 into your Python projects? Ollama's got you covered! Here's a quick example:

import ollama  # the official Ollama Python client: pip install ollama

# Ask the locally running deepseek-r1 model a single question.
response = ollama.chat(model='deepseek-r1', messages=[
    {
        'role': 'user',
        'content': 'Explain the concept of recursion in programming.',
    },
])

# The reply text lives under message.content in the response.
print(response['message']['content'])

This opens up a world of possibilities for AI-powered applications right on your local machine!
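By default, `ollama.chat` waits for the complete reply. For a more responsive feel you can pass `stream=True` and print tokens as they arrive. The sketch below assumes the same ollama package and a running local server (which is why the live call is deferred and commented); `collect` and `stream_reply` are illustrative helper names:

```python
def collect(chunks):
    """Join streamed chat fragments into a single reply string."""
    return ''.join(chunk['message']['content'] for chunk in chunks)

def stream_reply(prompt, model='deepseek-r1'):
    # Deferred import: requires `pip install ollama` and a running Ollama server.
    import ollama
    for chunk in ollama.chat(
        model=model,
        messages=[{'role': 'user', 'content': prompt}],
        stream=True,  # yields partial messages as they're generated
    ):
        print(chunk['message']['content'], end='', flush=True)

# stream_reply('Explain recursion in one short paragraph.')
```

Streaming makes long answers feel much snappier, since you start reading while the model is still thinking.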

Troubleshooting Common Issues

Even the smoothest journeys can hit a few bumps, so let's address some common issues you might encounter:

  1. Model not found: Double-check that you've successfully pulled the model using ollama pull deepseek-r1.
  2. Out of memory errors: DeepSeek R1 is a hefty model. If you're running into memory issues, try closing other applications or switching to a smaller variant such as deepseek-r1:1.5b.
  3. Slow responses: This is normal, especially on less powerful hardware. Be patient, or consider upgrading your GPU if you need faster performance.

Remember, the Ollama community is quite helpful, so don't hesitate to reach out if you're stuck!

Testing DeepSeek R1 API with Apidog

Now that we've got DeepSeek R1 up and running locally, let's take it a step further and test its API with Apidog. Under the hood, Ollama exposes a REST API on http://localhost:11434, and exercising it is a good way to confirm your local setup works before building on it.
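Before wiring things up in Apidog, it can help to see the raw HTTP call you'll be reproducing. This sketch assumes Ollama's default port (11434) and its /api/chat endpoint; `build_payload` and `ask` are illustrative helper names, and the live call is commented out since it needs the server running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_payload(prompt, model="deepseek-r1"):
    """Request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "stream": False,  # ask for one complete JSON response
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# print(ask("What is machine learning?"))  # needs the Ollama server running
```

The same URL and JSON body are exactly what you'll plug into Apidog in the steps below.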

💡
Apidog simplifies API management—document, test, and automate in one platform. Try it with DeepSeek API today!
Your First API Call - DeepSeek API

Create a New Project: In Apidog, create a new project to hold your local DeepSeek R1 requests.

Add API Endpoints: Add a POST endpoint pointing at http://localhost:11434/api/chat, Ollama's local chat endpoint.

Define Request Body: Set the body type to JSON and supply the model name and your messages, for example: {"model": "deepseek-r1", "stream": false, "messages": [{"role": "user", "content": "What is machine learning?"}]}.

Send the Request: Hit Send and let Apidog fire the request at your local model.

Review the Response: Inspect the JSON that comes back; the model's answer is in the message.content field.

If everything went well, you should see a successful response with DeepSeek R1's explanation of machine learning!

Conclusion: Your Local AI Journey Begins Now

Running DeepSeek R1 locally with Ollama offers a powerful solution for integrating advanced AI capabilities into your applications. By following the steps outlined in this guide, you can set up, configure, and test the model effectively. Additionally, utilizing tools like Apidog enhances your ability to develop, test, and document APIs efficiently, streamlining the development process.

