DeepSeek-R1 on Cursor with Ollama

So, there are many options for running a local LLM, and DeepSeek-R1 dropped just a few weeks ago. If you want to use Ollama and a local LLM in Cursor, this post has you covered.

First you need Ollama itself. After installing it, you need to enable CORS for Ollama; this is required, or Cursor will respond with 403 Forbidden. As shown below, we define OLLAMA_ORIGINS in the Windows environment variables.

(Screenshot: defining OLLAMA_ORIGINS in the Windows environment variables dialog.)
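If you prefer the command line over the GUI dialog, a user environment variable can also be set from PowerShell. This is a minimal sketch assuming that allowing all origins with * is acceptable for your setup; open a new terminal afterwards so the change is picked up:

setx OLLAMA_ORIGINS "*"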

OK, next we need the deepseek-r1 model. I tried deepseek-r1:8b because this model has good benchmarks, and it runs on my PC with an Nvidia RTX 3070 8GB (enough VRAM; I get 60-70 t/s). We can use

ollama run deepseek-r1:8b

Then the model starts downloading. Once that is done, quit Ollama via the Windows tray icon (or however you prefer); we need to restart Ollama so the CORS variable we defined takes effect.

(Screenshot: quitting Ollama from the Windows tray icon.)

Then you can relaunch Ollama via the Start menu.

(Screenshot: launching Ollama from the Start menu.)
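If you want to confirm the variable is visible to newly started processes, you can check it from a fresh PowerShell window (this only verifies that the user environment variable we defined above exists):

$env:OLLAMA_ORIGINS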

By default, Ollama serves at http://127.0.0.1:11434, but if you point Cursor directly at that endpoint it can't be used, since Cursor routes requests through its own servers and can't reach a local address. So we need ngrok. Download it and log in, and the dashboard will instruct you to authenticate with your auth token.
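For reference, that auth step is a single command; <YOUR_AUTHTOKEN> is a placeholder for the token shown in your ngrok dashboard:

.\ngrok.exe config add-authtoken <YOUR_AUTHTOKEN>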

Next, we use ngrok to expose a public URL for Ollama; the --host-header flag rewrites the Host header so Ollama accepts requests arriving through the tunnel:
.\ngrok.exe http 11434 --host-header="localhost:11434"

Like this:
(Screenshot: ngrok running with the tunnel established.)

Then we get the public forwarding URL to use as the OpenAI endpoint.

(Screenshot: the ngrok forwarding URL.)

You can check whether your endpoint is active.

(Screenshot: the endpoint responding in the browser.)
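For example, requesting the tunnel's root path should return Ollama's health message (here https://xxxxxx.ngrok-free.app stands in for your actual tunnel URL; in PowerShell use curl.exe rather than the built-in curl alias):

curl.exe https://xxxxxx.ngrok-free.app

which should print: Ollama is running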

OK, now we move to Cursor.

(Screenshot: Cursor's model settings.)
We need to define in Cursor which model we use; you can check the list of models you have with ollama list.
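The model name you add in Cursor must exactly match the name Ollama reports. The output of ollama list looks roughly like this (the ID, size, and date on your machine will differ):

ollama list
NAME              ID            SIZE      MODIFIED
deepseek-r1:8b    <model id>    4.9 GB    2 minutes ago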

(Screenshot: adding the model name in Cursor.)
In the OpenAI API Key settings, use your public URL https://xxxxxx.ngrok-free.app as the base URL and ollama as the API key, and you're done.
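Before switching to Cursor, you can sanity-check the setup end to end: Ollama exposes an OpenAI-compatible endpoint at /v1/chat/completions, which is what Cursor will call through the tunnel. Again, xxxxxx stands in for your tunnel subdomain, and the quoting shown is for PowerShell 7+; older shells may need different escaping:

curl.exe https://xxxxxx.ngrok-free.app/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "deepseek-r1:8b", "messages": [{"role": "user", "content": "Hello"}]}'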

If those steps are done, we can try the model in Cursor chat.

(Screenshot: chatting with deepseek-r1:8b in Cursor.)
As you can see, the local LLM works properly. In some cases it isn't supported, though; for example, it can't be used for Compose, because Cursor only allows Anthropic and GPT models there.
