Introduction
In Why You Should Try a Local LLM Model—and How to Get Started, I introduced how to set up a local LLM model using LM Studio.
In this article, I will show you how to chat with a local LLM model in Obsidian.
Method
Obsidian’s Copilot plugin allows you to connect to a custom model, enabling you to use AI-generated insights directly within your markdown workspace. Here’s a step-by-step guide to setting it up.
Step 1: Install the Copilot Plugin in Obsidian
- Open Obsidian and go to Settings > Community Plugins.
- Enable Community Plugins if you haven’t already.
- Search for Copilot in the plugin library and click Install.
- Once installed, enable the plugin and access its settings via Settings > Copilot.
Step 2: Add Your Local LLM Model in Copilot
- In the Provider field, select lm-studio.
- Enter the Model Name.
- Click Verify Connection to ensure that Copilot can communicate with the model.
- Once the connection is verified successfully, click Add Model. Your custom model will now appear in the list of available models.
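Before clicking Verify Connection, it can help to confirm that LM Studio's local server is actually running and to see which model IDs it exposes. Below is a minimal sketch, assuming LM Studio's default OpenAI-compatible server at http://localhost:1234/v1 (the port is configurable in LM Studio's server settings):

```python
import requests

# Assumes LM Studio's local server is running on its default port (1234).
# The /v1/models endpoint is part of the OpenAI-compatible API that
# LM Studio exposes; it lists the models currently available.
BASE_URL = "http://localhost:1234/v1"

response = requests.get(f"{BASE_URL}/models", timeout=5)
response.raise_for_status()

for model in response.json().get("data", []):
    print(model["id"])  # Use one of these IDs as the Model Name in Copilot.
```

If this prints at least one model ID, Copilot's Verify Connection step should succeed when you use that ID as the Model Name.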
Step 3: Enable the Custom Model in General Settings
- In Copilot > General Settings, select the custom model you just added.
- Make sure CORS is enabled in both LM Studio and Copilot.
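If Verify Connection fails even though the server responds to plain requests, CORS is a common culprit. As a rough check, you can send a request with an Origin header and look for an Access-Control-Allow-Origin header in the response. This is a sketch under the same default-port assumption as above; "app://obsidian.md" is an illustrative origin, not necessarily the exact value Obsidian sends:

```python
import requests

# Rough CORS check: send an Origin header the way a browser-based app
# (such as Obsidian's renderer) would, then inspect the response headers.
response = requests.get(
    "http://localhost:1234/v1/models",
    headers={"Origin": "app://obsidian.md"},  # illustrative origin
    timeout=5,
)

allow_origin = response.headers.get("Access-Control-Allow-Origin")
if allow_origin:
    print(f"CORS enabled, Access-Control-Allow-Origin: {allow_origin}")
else:
    print("No CORS headers returned; enable CORS in LM Studio's server settings.")
```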
Step 4: Open the Copilot Chat Window
Press Ctrl + Shift + P (Windows) or Cmd + Shift + P (Mac) and select Open Copilot Chat Window.
Now you can chat with your local LLM model in Obsidian's sidebar.
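Under the hood, Copilot talks to LM Studio through a standard OpenAI-style chat completions endpoint, so you can sanity-check the model outside Obsidian too. Here is a minimal sketch; "your-local-model" is a placeholder, so substitute the model ID that /v1/models reported on your machine:

```python
import requests

# Sends one chat message to LM Studio's OpenAI-compatible endpoint.
# "your-local-model" is a placeholder model ID, not a real name.
payload = {
    "model": "your-local-model",
    "messages": [
        {"role": "user", "content": "Summarize this note in one sentence: ..."}
    ],
    "temperature": 0.7,
}

response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json=payload,
    timeout=120,
)
response.raise_for_status()

print(response.json()["choices"][0]["message"]["content"])
```

If this returns a sensible reply, any remaining issues are on the Obsidian/Copilot side rather than with LM Studio itself.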
Conclusion
This is a simple way to chat with your local LLM model in Obsidian.
Explore more
Thank you for taking the time to explore data-related insights with me. I appreciate your engagement.