Deploying DeepSeek-R1 Locally with Ollama
Ollama simplifies running large language models (LLMs) on your local machine by handling model downloads, optimization, and seamless deployment.

Visit the official Ollama website, download the installer for your operating system, and install it like any other application.
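After installation, you can confirm the CLI is available from your terminal (this assumes the installer placed the ollama binary on your PATH):
ollama --version
This should print the installed Ollama version.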

Before downloading and running DeepSeek-R1, you can check which models are already installed on your machine with:
ollama list
This command lists the models you have already downloaded and can run immediately.
Run the following command in your terminal to download and launch DeepSeek-R1:
ollama run deepseek-r1
If you want to use a specific model size, replace Xb in the command below with the desired size (1.5b, 7b, 8b, 14b, 32b, 70b, 671b):
ollama run deepseek-r1:Xb
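For example, to download and run the 7B variant:
ollama run deepseek-r1:7b
Larger variants generally give better answers but need substantially more memory, so pick a size that fits your hardware.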
To keep DeepSeek-R1 running as a background service and enable API access, start the Ollama server:
ollama serve
This exposes a local HTTP API (on port 11434 by default) that other applications can call.
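To confirm the server is reachable, you can query Ollama's model-listing endpoint (a quick sanity check; /api/tags returns the models installed locally as JSON):
curl http://localhost:11434/api/tags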
Once the model is downloaded, you can chat with DeepSeek-R1 interactively in the terminal, or call it through the API.
Use cURL to send API requests:
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [{ "role": "user", "content": "What is 25 * 25?" }],
  "stream": false
}'
This sends a chat request to the local Ollama API; because "stream" is set to false, DeepSeek-R1's complete reply comes back in a single JSON object.
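If you only want the model's text rather than the full JSON envelope, you can pipe the result through jq (this assumes jq is installed; in the non-streaming chat response the reply lives under message.content):
curl -s http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [{ "role": "user", "content": "What is 25 * 25?" }],
  "stream": false
}' | jq -r '.message.content'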
To call the model from Python instead, first install the ollama Python package:
pip install ollama
Then, use the following Python script to communicate with the model:
import ollama

# Send a single chat request to the locally running deepseek-r1 model
response = ollama.chat(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "Explain Newton's Second Law."},
    ],
)

# The assistant's reply is under the "message" -> "content" key
print(response["message"]["content"])
This script sends a query to the model and prints the response.
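For longer answers you may prefer to print the reply as it is generated. A minimal streaming sketch, assuming the stream=True option exposed by the ollama Python package, which yields partial message chunks:

import ollama

# Request a streamed response: chunks arrive as the model generates tokens
stream = ollama.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Explain Newton's Second Law."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a fragment of the assistant's message
    print(chunk["message"]["content"], end="", flush=True)
print()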
Now you're all set to run DeepSeek-R1 locally with Ollama!