How to Run DeepSeek R1 Locally Using Ollama?

Learn how to run DeepSeek R1 locally using Ollama in this comprehensive guide. Discover step-by-step instructions, prerequisites, and how to test the API with Apidog.

Ashley Innocent

Updated on January 21, 2025

Excited about the DeepSeek R1 model? It's an impressive piece of work, and now you can run it right on your own machine. So buckle up!

💡
Ready to supercharge your API testing? Download Apidog for free and take your Deepseek R1 experiments to the next level! With Apidog, you can easily create, test, and document your APIs, making it the perfect companion for your local AI adventures. Don't miss out on this powerful tool that'll streamline your workflow and boost your productivity!

What Is DeepSeek R1?

DeepSeek R1 is a cutting-edge reasoning model that has garnered attention for its performance across various tasks, including mathematics, coding, and logical reasoning. Running this model locally offers several advantages, such as reduced latency, enhanced privacy, and greater control over your AI applications. Ollama, a versatile tool, facilitates the deployment and execution of such models on local machines.


Prerequisites: Setting the Stage

Alright, let's make sure we've got everything we need before we start this adventure:

  1. A computer with a decent CPU and GPU (the beefier, the better!)
  2. Ollama installed on your system
  3. Some basic command-line knowledge
  4. A thirst for AI knowledge (which I'm sure you've got in spades!)

Setting Up Ollama

Ollama streamlines the process of running AI models locally. To set it up:

Download Ollama:

  • Visit the Ollama website and download the version compatible with your operating system.

Install Ollama:

  • Follow the installation instructions provided on the website.

Verify Installation:

  • Open your terminal and run:

ollama --version

  • This command should display the installed version of Ollama, confirming a successful installation.
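
With the CLI confirmed, you can also check that Ollama's local HTTP server is reachable, since that's what the Python examples and the Apidog tests later in this guide talk to. Here's a minimal Python sketch, assuming the default Ollama address of http://localhost:11434 and the requests package installed:

import requests

# Ollama serves a local HTTP API on port 11434 by default.
# A GET on the root path returns a short status message if the server is up.
resp = requests.get('http://localhost:11434')
print(resp.status_code, resp.text)

If this prints a 200 status, the server is up and ready to host models.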

Step-by-Step Guide to Running Deepseek R1 Locally

Step 1: Downloading the Deepseek R1 Model

First things first, we need to get our hands on the Deepseek R1 model. Luckily, Ollama makes this super easy. Open up your terminal and type:

ollama run deepseek-r1

This command tells Ollama to download the default DeepSeek R1 model (a distilled, several-billion-parameter version of the full model). Sit back and relax while it does its thing – depending on your internet speed, this might take a while. Maybe grab a coffee or do some stretches?

Step 2: Verifying the Installation

Once the download is complete, let's make sure everything is in order. Run this command:

ollama list

You should see a deepseek-r1 entry in the list of available models. If you do, give yourself a pat on the back – you're halfway there!

Step 3: Running Deepseek R1

Now for the moment of truth – let's fire up Deepseek R1! Use this command:

ollama run deepseek-r1

And just like that, you're conversing with one of the most advanced AI models out there, right from your own computer. How cool is that?

Step 4: Interacting with Deepseek R1

Once the model is running, you'll see a prompt where you can start typing. Go ahead, ask it something! Here are a few ideas to get you started:

  • "Explain quantum computing in simple terms."
  • "Write a short story about a time-traveling cat."
  • "What are the potential implications of artificial general intelligence?"

Feel free to get creative – Deepseek R1 is quite versatile!

Advanced Usage: Customizing Your Experience

Now that you've got the basics down, let's explore some advanced features to really make the most of your local Deepseek R1 setup.

Using Deepseek R1 in Your Projects

Want to integrate Deepseek R1 into your Python projects? Ollama's got you covered! Here's a quick example:

# Requires the ollama Python package: pip install ollama
import ollama

response = ollama.chat(model='deepseek-r1', messages=[
    {
        'role': 'user',
        'content': 'Explain the concept of recursion in programming.',
    },
])

print(response['message']['content'])

This opens up a world of possibilities for AI-powered applications right on your local machine!
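
The ollama package also supports streaming, which is handy for DeepSeek R1's long, reasoning-heavy answers. Here's a minimal sketch, assuming the same locally pulled deepseek-r1 model:

import ollama

# stream=True yields the reply in chunks as they are generated
stream = ollama.chat(
    model='deepseek-r1',
    messages=[
        {'role': 'user', 'content': 'Explain the concept of recursion in programming.'},
    ],
    stream=True,
)

for chunk in stream:
    # Print each partial piece of the message as it arrives
    print(chunk['message']['content'], end='', flush=True)

One thing to keep in mind: DeepSeek R1 typically emits its chain of thought inside <think>...</think> tags before the final answer, so you may want to filter those tags out in application code.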

Troubleshooting Common Issues

Even the smoothest journeys can hit a few bumps, so let's address some common issues you might encounter:

  1. Model not found: Double-check that you've successfully pulled the model using ollama pull deepseek-r1.
  2. Out of memory errors: Deepseek R1 is a hefty model. If you're running into memory issues, try closing other applications or consider using a smaller model variant (see the sketch after this list).
  3. Slow responses: This is normal, especially on less powerful hardware. Be patient, or consider upgrading your GPU if you need faster performance.
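
For the out-of-memory case, smaller distilled DeepSeek R1 variants are published in the Ollama library. Here's a minimal sketch of pulling and using one from Python; the exact tag (deepseek-r1:1.5b here) is an assumption, so check the Ollama library page for the tags that are actually available:

import ollama

# Hypothetical smaller tag - verify available tags on the Ollama library page
small_model = 'deepseek-r1:1.5b'

# Download the variant if it isn't already present
ollama.pull(small_model)

response = ollama.chat(model=small_model, messages=[
    {
        'role': 'user',
        'content': 'Explain recursion briefly.',
    },
])

print(response['message']['content'])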

Remember, the Ollama community is quite helpful, so don't hesitate to reach out if you're stuck!

Testing Deepseek R1 API with Apidog

Now that we've got Deepseek R1 up and running locally, let's take it a step further and test its API capabilities using Apidog. This powerful tool will help us ensure our local setup is working correctly and allow us to explore the model's full potential.


Create a New Project:

  • In Apidog, click on "New Project" and provide a name for your project.

Add API Endpoints:

  • Click on the "New request" button to add a new API endpoint.
  • Enter the URL of your local DeepSeek R1 endpoint served by Ollama (by default, http://localhost:11434/api/chat), then specify the HTTP method (e.g., POST) and any necessary headers or authentication details. The full request is sketched after these steps.

Define Request Body:

  • If your API requires a request body, navigate to the "Body" tab.
  • Select the appropriate format (e.g., JSON) and input the required parameters. For the Ollama chat endpoint, this is a JSON body with the model and messages fields.

Send the Request:

  • Click on the "Send" button to execute the API request.

Review the Response:

  • Examine the response status code, headers, and body to ensure the API is functioning as expected. Apidog provides tools to validate responses against expected outcomes, aiding in comprehensive testing.
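
For reference, the request you build in Apidog maps onto Ollama's local chat endpoint. Here's a minimal Python sketch of the same call, assuming the default Ollama address of http://localhost:11434:

import requests

# The same POST request you would configure in Apidog
payload = {
    'model': 'deepseek-r1',
    'messages': [
        {'role': 'user', 'content': 'Explain machine learning in one paragraph.'},
    ],
    'stream': False,  # ask for one complete JSON response instead of a stream
}

resp = requests.post('http://localhost:11434/api/chat', json=payload)
resp.raise_for_status()
print(resp.json()['message']['content'])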

If everything went well, you should see a successful response with Deepseek R1's explanation of machine learning!

Conclusion: Your Local AI Journey Begins Now

Running DeepSeek R1 locally with Ollama offers a powerful solution for integrating advanced AI capabilities into your applications. By following the steps outlined in this guide, you can set up, configure, and test the model effectively. Additionally, utilizing tools like Apidog enhances your ability to develop, test, and document APIs efficiently, streamlining the development process.

