I keep a lot of notes in markdown files, and I’d like an LLM to assist.

I regularly use Open WebUI with inference routed through Hugging Face. Open WebUI sort of has this functionality: you can upload a markdown file and prompt it to improve it in whatever way, but of course that’s a fairly clunky workflow.

I really want something built into the editor that can use RAG to pull other files into context.
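For context, RAG here just means: retrieve the other notes most relevant to what you're working on and stuff them into the prompt. A toy sketch of the retrieval half, with bag-of-words cosine similarity standing in for a real embedding model (the file names and scoring are illustrative only, not any particular tool's behavior):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real setup would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, notes: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k notes most similar to the query."""
    q = embed(query)
    ranked = sorted(notes, key=lambda name: cosine(q, embed(notes[name])), reverse=True)
    return ranked[:k]

notes = {  # hypothetical note files
    "gardening.md": "tomato plants need sun and water",
    "llm-setup.md": "open webui routes inference to a provider",
    "todo.md": "buy groceries and water the plants",
}
print(retrieve("how do I route inference", notes, k=1))  # → ['llm-setup.md']
```

An editor plugin would then prepend the retrieved files to the prompt before sending it to whatever provider you've configured.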

I also don’t want to be locked into a specific LLM or provider; I’d like to be able to point it at OpenRouter or similar.

  • RAG on its own is an outdated mechanism; a full agentic workflow is much better imo. I’ve written my own custom thing that uses a Matrix account, a Pi, a vector embedder via local Ollama, and Chroma as the vector store. The agent has custom tools to query the vector store, run bash, etc. My Logseq notes sync to my server via Syncthing, and a file watcher updates the vector store as my notes change. The agent can edit notes like any other file. I then simply have a Matrix client I can use to talk to the agent. The file watcher looks for “/sydney” (that’s my agent’s name) and sends a message via Matrix to get the agent to go look at that file/note and make the changes requested in the command. It’s kinda OpenClaw-ish but a lot less context heavy and doesn’t run forever unless triggered.
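The trigger half of that file watcher can be sketched in a few lines. This is a guess at the mechanism, not the poster's actual code: when a note changes, scan it for lines starting with the agent's trigger word and pull out the instruction to forward over Matrix (the `AGENT_TRIGGER` name and the whole-word matching are my assumptions):

```python
import re

# Hypothetical trigger token; the poster's watcher looks for "/sydney".
AGENT_TRIGGER = "/sydney"

def extract_commands(note_text: str, trigger: str = AGENT_TRIGGER) -> list[str]:
    """Return the instruction text of every line that starts with the trigger.

    e.g. the note line "/sydney tidy up the headings" yields
    "tidy up the headings".
    """
    commands = []
    for line in note_text.splitlines():
        stripped = line.strip()
        # Match the trigger as a whole word at the start of the line.
        m = re.match(rf"{re.escape(trigger)}\b\s*(.*)", stripped)
        if m and m.group(1):
            commands.append(m.group(1))
    return commands

# In the real setup, each extracted command would be sent to the agent
# over Matrix along with the path of the note that changed.
note = "- project notes\n/sydney tidy up the headings\nmore text"
print(extract_commands(note))  # → ['tidy up the headings']
```

Keeping the watcher this dumb is what makes the setup cheap: the agent only wakes up when a trigger line actually appears, so nothing runs continuously.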