Running DeepSeek R1 locally

See my post here:
https://forum.exetools.com/showpost.php?p=132646&postcount=1

DeepSeek makes its flagship V3 model, on par with GPT-4, and its reasoning model R1 freely accessible:
https://www.deepseek.com/
AI training for 5.6 million USD exceeding the quality of models that cost 100 million to 1 billion USD to train. High-quality inference is within reach of your own local environment, where your data stays private. I've found their models significantly better at reasoning than OpenAI's. It's quite exciting, and I'm surprised no one has brought the topic up yet, given the many use cases for reverse engineering and the very low cost.

If you have a GPU, I recommend Ollama, which works on Windows, Linux and Mac (it can also run Facebook/Meta's Llama models):

https://ollama.com/
You can choose from the models listed here:

https://ollama.com/library/deepseek-r1
The 8b variant is pretty lightweight, but if you have a recent Nvidia GPU with plenty of VRAM, why not go for 32b.
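
Once Ollama is serving a model (e.g. after "ollama pull deepseek-r1:8b"), you can also script it. Here is a minimal Python sketch, assuming Ollama's default local endpoint on port 11434 and the deepseek-r1:8b tag; adjust the model name to whichever variant you pulled:

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes Ollama is listening on its default port 11434 and that the
# deepseek-r1:8b model has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def ask(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send a single prompt to the local model and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Explain what a stack canary is in two sentences."))
```

Everything stays on localhost, so nothing you paste into the prompt leaves your machine.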

For a frontend chat interface, I recommend Chatbox AI:
https://chatboxai.app
Even better, the V3 and R1 models are open source, so you can do your own fine-tuning if you have the resources (a rough sketch of what that can look like follows the links below).

R1:
https://github.com/deepseek-ai/DeepSeek-R1
V3:
https://github.com/deepseek-ai/DeepSeek-V3
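
For the fine-tuning part, here is a rough, hedged sketch of what a local run could look like, using LoRA adapters on one of the distilled R1 checkpoints via Hugging Face transformers/peft. This is not DeepSeek's official training pipeline; the model name, toy dataset and hyperparameters are my own illustrative assumptions, and the full V3/R1 weights are far too large for a typical single-GPU setup, which is why a small distilled variant is used:

```python
# Sketch: LoRA fine-tuning of a distilled R1 checkpoint (illustrative only).
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small distilled checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

# Attach small trainable LoRA adapters instead of updating all the weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy dataset: replace with your own prompt/answer pairs.
texts = ["Q: What does the x86 instruction CDQ do?\n"
         "A: It sign-extends EAX into EDX:EAX."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="r1-lora", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4, logging_steps=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("r1-lora")  # saves only the small LoRA adapter weights
```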