https://youtu.be/AHGyGeEhRZs?si=6kWou7t4phzMa0BN
<aside> 👉 These are the steps as described in the YouTube video, as of January 2025
</aside>
ollama run deepseek-r1:7b
/bye (when you want to leave the chat)
ollama stop deepseek-r1:7b (if you want to stop the model)
ollama run deepseek-r1:1.5b
python -m venv .venv
source ./.venv/bin/activate
pip install ollama
touch ask_deepseek.py
import ollama

response = ollama.chat(
    model="deepseek-r1:1.5b",
    messages=[
        {"role": "user", "content": "Why is the sky blue?"}
    ],
    options={"temperature": 0.7}
)

print(response["message"]["content"])
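The deepseek-r1 models prefix their answer with a `<think>…</think>` reasoning block inside the message content. If you only want the final answer, you can strip that block first; this small helper is a sketch, not from the video:

```python
import re

def strip_think(text):
    """Remove the <think>...</think> reasoning block that deepseek-r1
    emits before its final answer, returning just the answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

You would then call `print(strip_think(response["message"]["content"]))` instead of printing the raw content.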
<aside> 👉 Ollama is also OpenAI API compatible, so you can point your application's base URL at http://localhost:11434/v1
</aside>
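Because the endpoint is OpenAI-compatible, you can call it with nothing but the standard library; a minimal sketch, assuming Ollama is running on its default port and the 1.5b model has been pulled as above:

```python
import json
import urllib.request

# Ollama's OpenAI-compatible base URL (assumption: default port 11434)
BASE_URL = "http://localhost:11434/v1"

def chat_completion(prompt, model="deepseek-r1:1.5b"):
    """POST to the OpenAI-compatible /chat/completions route using only the stdlib."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Responses follow the OpenAI schema: choices -> message -> content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat_completion("Why is the sky blue?"))
```

The same base URL also works with the official `openai` Python client by passing `base_url="http://localhost:11434/v1"` (any non-empty string works as the API key locally).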
<aside> 👉
To call DeepSeek’s largest model, see https://api-docs.deepseek.com/
</aside>
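DeepSeek's hosted API follows the same OpenAI-compatible shape, so the call looks almost identical; this is a sketch under assumptions (the base URL, the `deepseek-reasoner` model name for R1, and a `DEEPSEEK_API_KEY` environment variable), so verify the current values against https://api-docs.deepseek.com/ before use:

```python
import json
import os
import urllib.request

# Assumption: DeepSeek's hosted OpenAI-compatible base URL
DEEPSEEK_BASE_URL = "https://api.deepseek.com/v1"

def ask_deepseek_cloud(prompt, model="deepseek-reasoner"):
    """Call DeepSeek's hosted API; model name and endpoint are assumptions,
    check https://api-docs.deepseek.com/ for the current values."""
    req = urllib.request.Request(
        f"{DEEPSEEK_BASE_URL}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Assumption: API key supplied via the DEEPSEEK_API_KEY env var
            "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```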