πŸš€ Deploying DeepSeek-R1 Locally with Ollama

🌟 Introduction

Ollama simplifies running large language models (LLMs) on your local machine by handling model downloads, optimization, and seamless deployment.

πŸ› οΈ Step 1: Install Ollama

Visit the official Ollama website (https://ollama.com) to download the installer for your operating system, then install it like any other application.
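
To verify the installation, check the version from your terminal (this assumes the installer added the ollama binary to your PATH, which the official installers do by default):

ollama --version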

πŸ” Step 2: Check Available DeepSeek-R1 Models

Before downloading and running DeepSeek-R1, you can check which models are already installed on your machine using:

ollama list

This command lists only the models you have already downloaded. To browse every DeepSeek-R1 variant available for download, see the Ollama model library at https://ollama.com/library/deepseek-r1.
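
Once a model has been downloaded, you can also inspect its details, such as parameter count, quantization, and context length, with the ollama show command; for example:

ollama show deepseek-r1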

πŸ“₯ Step 3: Download & Run DeepSeek-R1

Run the following command in your terminal to download and launch DeepSeek-R1:

ollama run deepseek-r1

If you want a specific model size, replace Xb in the command below with the desired size (1.5b, 7b, 8b, 14b, 32b, 70b, 671b). The tags from 1.5b to 70b are distilled models, while 671b is the full DeepSeek-R1; larger variants require correspondingly more memory and disk space:

ollama run deepseek-r1:Xb
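
For example, to run the 7B distilled variant (as a rough rule of thumb it needs on the order of 8 GB of free RAM; larger tags need proportionally more):

ollama run deepseek-r1:7b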

⚑ Step 4: Run DeepSeek-R1 in the Background

To keep DeepSeek-R1 available as a background service and enable API access, start the Ollama server (note that the desktop installers typically start this server for you automatically):

ollama serve

The server exposes an HTTP API on http://localhost:11434 that other applications can call.
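
To confirm the server is up, you can query that API directly. For example, this request lists the models available locally:

curl http://localhost:11434/api/tags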

πŸ’‘ Using DeepSeek-R1 Locally

πŸ–₯️ 1. Perform Inference via CLI

Once the model is downloaded, you can chat with DeepSeek-R1 directly from your terminal: running ollama run deepseek-r1 opens an interactive prompt.
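
A minimal session looks like the sketch below; the >>> prompt comes from Ollama's interactive shell, and typing /bye ends the session (the model's answer will vary, so it is omitted here):

ollama run deepseek-r1
>>> What is 25 * 25?
>>> /bye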

🌐 2. Access DeepSeek-R1 via API

Use cURL to send API requests:

curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [{ "role": "user", "content": "What is 25 * 25?" }],
  "stream": false
}'

This command sends a chat request to the local Ollama server; because streaming is disabled, the complete answer is returned as a single JSON object.
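
If you omit "stream": false (streaming is the default), the server instead returns the answer incrementally as newline-delimited JSON chunks, each carrying a fragment of the message; a minimal sketch:

curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [{ "role": "user", "content": "What is 25 * 25?" }]
}'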

🐍 3. Interact with DeepSeek-R1 in Python

First, install the ollama Python package:

pip install ollama

Then, use the following Python script to communicate with the model:

import ollama

# Send a single-turn chat request to the locally served model.
response = ollama.chat(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "Explain Newton's Second Law."},
    ],
)

# The reply text lives under response["message"]["content"].
print(response["message"]["content"])

This script sends a query to the model and prints the response.
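
The package also supports streaming: passing stream=True to ollama.chat returns an iterator of partial responses, so you can print the answer as it is generated. A minimal sketch, assuming the model has already been pulled and the server is running:

import ollama

# Request a streamed reply: the call returns an iterator of partial responses.
stream = ollama.chat(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "Explain Newton's Second Law."},
    ],
    stream=True,
)

# Print each fragment as it arrives, without buffering the whole answer.
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()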

πŸŽ‰ Now you’re all set to run DeepSeek-R1 locally with Ollama! πŸš€