Ollama is an open-source, user-friendly platform for running LLMs on your local machine.
https://ollama.com/
It can be used in an interactive chat mode or in server mode (HTTP API); examples of both appear below.
# Ollama
User environment variables:
- OLLAMA_MODELS — directory where downloaded models are stored
- OLLAMA_HOST = 0.0.0.0 — listen on all interfaces, not just localhost
- OLLAMA_ORIGINS = * — allow cross-origin (CORS) requests from any origin
Note: if the system-level variable does not take effect, set it as a user variable instead (see the commands below). Ollama prioritizes user environment variables over system-wide ones when loading the model path, so the system-wide setting is ignored unless explicitly configured otherwise.
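On Windows these can be set as user variables with setx; restart Ollama afterwards so the new values are picked up. A minimal sketch, where the models directory is only an example path, not from these notes — substitute your own:

```shell
# Set user-level environment variables for Ollama (Windows).
# D:\ollama\models is a hypothetical example path.
setx OLLAMA_MODELS "D:\ollama\models"
setx OLLAMA_HOST "0.0.0.0"
setx OLLAMA_ORIGINS "*"
```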
Default port: 11434
Log path (Windows): \Users\meesi\AppData\Local\Ollama\server.log
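With the server running (it starts with the desktop app, or manually via ollama serve), other tools can reach it over HTTP on that port. A minimal sketch of a server-mode request; the model name is just an example of something already pulled locally:

```shell
# Ask the local Ollama server for a single, non-streamed completion.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```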
Models:
- DeepSeek
- Jailbroken ("abliterated") version: huihui_ai/deepseek-r1-abliterated
ollama run huihui_ai/deepseek-r1-abliterated
- other
# Ollama + AnythingLLM
Reference (Chinese): "No prior experience needed! Deploy Gemma with one line of code! Fully local! All about speed! No dependency wrangling! 7B size beats 13B performance! Model downloads included!" https://mp.weixin.qq.com/s/4hjewv3TFI5fqe66PaT6Tg
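The "one line" in that article boils down to pulling and running the model through Ollama. A minimal sketch, assuming the gemma:7b tag is the one you want from the Ollama library:

```shell
# Downloads the model on first run, then drops into interactive chat mode.
ollama run gemma:7b
```

AnythingLLM can then be pointed at the same local server (Ollama as the LLM provider, base URL http://localhost:11434) so it uses this model as its backend.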