If you're looking for an AI that excels in reasoning and is also free because it's open source, the newly launched DeepSeek R1 is a great choice. I...
Why would you advise using an extremely bad version like the 7B-parameter one? If you're a software developer, just use the closed-source LLMs, or use DeepSeek R1 and pay for its API instead of getting really bad results with the local LLM smh
In the article I mention the option of paying for the API. I wouldn't pay if I had a machine with excellent hardware, and showing that path is one of the aims of this article. I think it's worthwhile for users to test it out locally first, as everyone has different needs.
Bolt.diy is a good option if you want to develop locally while paying for a DeepSeek API key.
Not totally free, but the API cost is so low it doesn't make sense to host it locally.
Consider the token input/output costs for R1: millions of tokens for fractions of a penny. A $2 balance would last you weeks or months.
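As a rough sketch of that math (the prices below are assumptions based on DeepSeek's approximate launch rate card, not guaranteed current figures; check their pricing page):

```python
# Back-of-the-envelope cost estimate for the DeepSeek R1 API.
# Prices are assumptions (approximate launch rates); verify before relying on them.
PRICE_IN_PER_M = 0.55    # USD per 1M input tokens (assumed)
PRICE_OUT_PER_M = 2.19   # USD per 1M output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single API request."""
    return (input_tokens * PRICE_IN_PER_M + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# A typical coding prompt: ~2k tokens of context in, ~1k tokens out.
per_request = request_cost(2_000, 1_000)
print(f"~${per_request:.4f} per request")          # ~$0.0033
print(f"~{int(2 / per_request)} requests on $2")   # ~600 requests
```

At that rate, even heavy daily use takes a long time to burn through $2.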
"..pay for it's API instead of getting really bad results with the local LLM smh"
I already paid for my high-end PC with super hardware inside; now I want to f*ck it with AI, squeeze every bit of memory installed, and flex the GPU as well.
Agreed, the 7B isn't really good for coding and the 8B is worse (it's distilled from Llama instead of Qwen like the 7B).
Smaller models work best with aider, and somewhat with Bolt.diy too, if you know what you're doing and prompt them properly (but only on limited tasks and in small codebases, hence why they work best with aider). They tend to lose context of what they did earlier, start hallucinating very fast, and get stuck.
I've tried all the R1 models (Qwen versions) up to 14B locally (via Ollama and LM Studio) on my gaming desktop, and 32B via HF Inference (the free serverless API), with Cline, Roo-Code, aider, and Bolt.diy. Absolutely useless in Cline/Roo-Cline. 14B and 32B are usable with aider if you generate proper instructions and a roadmap using more powerful models for every phase and task. Also tried Phi-4 recently; surprisingly okay for such a small model in its tier!
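For anyone who wants to reproduce the 32B-via-HF-Inference setup, here is a minimal sketch using the huggingface_hub serverless client (the repo ID and token placeholder are assumptions; free-tier rate limits apply):

```python
# Minimal sketch: querying an R1 distill on Hugging Face serverless inference.
# The model repo ID and token are assumptions -- substitute your own.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",  # assumed repo ID
    token="hf_...",  # your Hugging Face access token
)

response = client.chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```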
I'm having the same experience with DS-7B. You mention that aider works better for the 7B?
I've just been playing a bit with deepseek-r1:7b on Ollama + aider, and it works quite decently, especially with the --watch-files flag:
aider --model ollama_chat/deepseek-r1:7b --watch-files
OK, my best results so far are with:
aider --model ollama_chat/deepseek-r1:7b --watch-files --browser
It enables a GUI to explore and edit files, a bit friendlier than the command line when working with multiple files.
I started coding with aider today. I'm impressed with it. Maybe I'll write about my experiences so far.
They have a 671B model; we'd need a workstation for it 😅🥲
Whoa, that 671B needs something like a Ryzen Threadripper and 6x RTX A6000 to run smoothly on a consumer workstation PC.
But since it is DeepSeek, it must require fewer resources than GPT-4o.
Yes, hahaha, it always depends on your needs.
The 7B is totally incompatible with Roo or Cline. I would suggest using continue.dev, which gives slightly better results; the only caveat is that it does not support automatic command running or file creation/edits.
Nice article. I used the 32B version to write some Python; it doesn't beat Claude at all, but I feel it is very useful. Open WebUI is a dream, and Cline should be better, but there are some tasks DeepSeek R1 just doesn't understand.
I need to do more research on Open WebUI; I confess I'm not familiar with it yet.
You would not have written this article as-is if you had tested DeepSeek R1 locally with Cline yourself, or maybe your test case was super simple.
I tested it with the 7B-parameter one, the one distilled from Qwen. And it is really bad for any moderately complex coding task!
Maybe DeepSeek R1 is good for chatting or "reasoning" or whatever, but not for software development.
It is not capable of understanding technical requirements, refining them, or architecting a solution. It is simply bad, especially when you compare it to Claude!
Thank you very much for your comment and feedback.
I tested the DeepSeek 1.5B, 7B, and 8B models with Cline. I've had to adapt to the limitations of my hardware and my needs. When I'm looking for something more accurate, I use API tokens.
The article mentions the paid API option; the intention is to show the process of running locally, understanding it, and adjusting and adapting it to each person's reality.
The article is really nice. Would you advise the same on NVIDIA Nano products as well?
I haven't tested any NVIDIA AI products yet. Thanks for your message.
I have tried the 7B with LM Studio and Cline in VS Code; even a simple prompt takes too long to respond.
Try downloading a lighter model. If the results are not good enough for your needs, I recommend paying for tokens and using their API key. I wish you the best.
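If you go the API route, DeepSeek exposes an OpenAI-compatible endpoint, so a minimal sketch like this should work (base URL and model name as documented by DeepSeek; the API key below is a placeholder):

```python
# Minimal sketch: calling DeepSeek R1 through the OpenAI-compatible API.
# Requires `pip install openai`; the API key is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # per DeepSeek's API docs
    api_key="sk-...",                     # your DeepSeek API key
)

completion = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[{"role": "user", "content": "Refactor this function to be iterative instead of recursive."}],
)
print(completion.choices[0].message.content)
```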
Do you trust running a Chinese model? It could be spyware.