Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

Not a model, not a UI, not magic voodoo.

A glass box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in
  • the original then gets moved into an /original/ sub-folder
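For the mechanically curious, the >>summ step is conceptually something like this. A sketch of the idea only, not the shipped code; the function names here are mine, and the header fields just mirror the provenance described above:

```python
# Sketch: what ">>summ new" conceptually does per document (illustrative, not llama-conductor's code).
import hashlib
import pathlib

def summ_one(doc: pathlib.Path, summarize) -> None:
    """summarize() is whatever calls your model; it takes text and returns a markdown summary."""
    raw = doc.read_bytes()
    sha = hashlib.sha256(raw).hexdigest()              # provenance: hash of the exact source file
    summary = summarize(raw.decode(errors="replace"))
    header = (
        "---\n"
        f"source_rel_path: {doc.name}\n"
        f"source_sha256: {sha}\n"
        "---\n\n"
    )
    (doc.parent / f"SUMM_{doc.stem}.md").write_text(header + summary)
    parked = doc.parent / "original"                    # originals get parked, not deleted
    parked.mkdir(exist_ok=True)
    doc.rename(parked / doc.name)
```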

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
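If you want the shape of that triple-pass in your head, it’s roughly this. A sketch under my own naming, not the actual pipeline; ask_model and retrieve_vault_facts stand in for whatever your stack provides:

```python
# Sketch of the Mentats flow: Vault-only grounding, thinker -> critic -> thinker, refusal when empty.
def mentats(question: str, ask_model, retrieve_vault_facts) -> str:
    facts = retrieve_vault_facts(question)   # Qdrant only: no chat history, no filesystem KBs, no Vodka
    if not facts:
        return ("FINAL_ANSWER:\n"
                "The provided facts do not contain information about this.\n\n"
                "Sources: Vault\nFACTS_USED: NONE")
    draft = ask_model(f"Answer ONLY from these facts:\n{facts}\n\nQuestion: {question}")
    critique = ask_model(f"List any claims in this draft not supported by the facts.\n"
                         f"Facts:\n{facts}\n\nDraft:\n{draft}")
    final = ask_model(f"Rewrite the draft so every claim is supported; keep refusals explicit.\n"
                      f"Facts:\n{facts}\nDraft:\n{draft}\nCritique:\n{critique}")
    return final
```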

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages
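A minimal sketch of the !! / ?? mechanic, assuming a flat JSON file and made-up defaults (the real field names and limits live in the repo, not here):

```python
# Sketch: verbatim fact storage with TTL + touch limits (illustrative, not Vodka's actual code).
import json
import pathlib
import time

STORE = pathlib.Path("facts.json")

def store_fact(text: str, ttl_days: int = 30, max_touches: int = 4) -> None:
    facts = json.loads(STORE.read_text()) if STORE.exists() else []
    facts.append({"text": text,
                  "expires": time.time() + ttl_days * 86400,
                  "touches_left": max_touches})
    STORE.write_text(json.dumps(facts, indent=2))

def recall(query: str) -> list[str]:
    if not STORE.exists():
        return []
    facts = json.loads(STORE.read_text())
    hits = []
    for fact in facts:
        alive = time.time() < fact["expires"] and fact["touches_left"] > 0
        matches = any(word in fact["text"].lower() for word in query.lower().split())
        if alive and matches:
            fact["touches_left"] -= 1    # each recall costs a touch, so stale junk ages out
            hits.append(fact["text"])
    STORE.write_text(json.dumps(facts, indent=2))
    return hits
```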

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42 → ?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
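CTC itself is barely more than this (defaults invented for the example; tune to taste):

```python
# Sketch: "Cut The Crap" context capping - keep the last N messages, then enforce a char budget.
def cut_the_crap(messages: list[dict], keep_last: int = 12, max_chars: int = 8000) -> list[dict]:
    trimmed = messages[-keep_last:]                        # last N turns only
    while trimmed and sum(len(m["content"]) for m in trimmed) > max_chars:
        trimmed.pop(0)                                     # drop oldest until under the char cap
    return trimmed
```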


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

  • Corbin@programming.dev · 32 minutes ago

    Hi! You are bullshitting us. To understand your own incorrectness, please consider what a chatbot should give as an answer to the following questions which I gave previously, on Lobsters:

    • Is the continuum hypothesis true?
    • Is the Goldbach conjecture true?
    • Is NP contained in P?
    • Which of Impagliazzo’s Five Worlds do we inhabit?

    The biggest questions in mathematics do not fit nicely into the chatbot paradigm and demonstrate that LLMs lack intelligence (whatever that is). I wrote about Somebody Else’s Paper, but it applies to you too:

    This attempt doesn’t quite get over the epistemological issue that something can be true or false, determined and decided, prior to human society learning about it and incorporating it into training data.

    Also, on a personal note, I recommend taking a writing course and organizing your thoughts prior to writing long posts for other people. Your writing voice is not really yours, but borrowed from chatbots; I suspect that you’re about halfway down the path that I described previously, on Lobsters. This is reversible but you have to care about yourself.

    • SuspciousCarrot78@lemmy.world (OP) · 6 minutes ago

      I have no idea what the fuck you’re on about.

      Is this an exorcism? Did you just down a quart of paint thinner? Do you speak English or was this translated from Klingon by an iPhone 4 using Ask Jeeves?

      RTFM. Operational terms are clearly defined. Context is defined. Epistemology is not what this is about.

      Take your lobster and cook it with some garlic butter.

      Additionally, this is “my voice” telling you to “get bent”. You arrogant, patronizing donut.

  • AliasAKA@lemmy.world · 3 hours ago

    I’ll look into this, but at first blush this is mostly just tool calling with RAG. This does not prevent a whole host of issues with AI, and doesn’t really prevent lying. The general premise here is to put tight guard rails on how it can interact with data, and in some cases entirely forcing a function / tool path with macros. I am not really sure this would work any better than a stateful, traditional search algorithm over your own data sources, which would require much less hardware / battery and would be much more portable.

    I like the effort, but this feels a bit like trying to make everything look like a nail.

    • SuspciousCarrot78@lemmy.world (OP) · 2 hours ago

      I’ll cop to that. At a high level it is “tool calling + RAG + guardrails”.

      Ok.

      But that’s sort of the point: boring plumbing that turns LLMs from improv actors into constrained components.

      Addressing your points directly as I understand them -

      1) Doesn’t prevent lying

      If you mean “LLMs can still hallucinate in general”, yes. No argument. I curtailed them as much as I could with what I had.

      But llama-conductor isn’t trying to solve “AI truth” as a metaphysical problem. It’s trying to solve a practical one:

      In Mentats mode, the model is not allowed to answer from its own priors or chat history. It only gets a facts block from the Vault. No facts → refusal (not “best effort guess”).

      That doesn’t make the LLM truthful. It makes it incapable of inventing unseen facts in that mode unless it violates constraints - and then you can audit it because you can see exactly what it was fed and what it output.

      So it’s not “solving lying,” it’s reducing the surface area where lying can happen. And making violations obvious.

      2) Wouldn’t a normal search algorithm be better?

      I don’t know. Would it? Maybe. If all you want is “search my docs,” then yes: use ripgrep + a UI. That’s lighter and more portable.

      The niche here is when you want search + synthesis + policy:

      • bounded context (so the system doesn’t slow down / OOM after long chats)
      • deterministic short-term memory (JSON on disk, not “model remembers”)
      • staged KB pipeline (raw docs → summaries with provenance → promote to Vault)
      • refusal-capable “deep think” mode for high-stakes questions

      I think a plain algo or search engine can do wonders. But it doesn’t give you a consistent behavioral contract across chat, memory, and retrieval.

      3) “Everything looks like a nail”

      Maybe. But the nail I’m hitting is: “I want local LLMs to shut up when they don’t know, and show receipts when they do.”

      That’s a perfectly cromulent nail to hit.

      If you don’t want an LLM in the loop at all, you’re right - don’t use this.

      If you do want one, this is me trying to make it behave like infrastructure instead of “vibes”.

      Now let’s see Paul Allen’s code :P

  • irmadlad@lemmy.world · 3 hours ago

    As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

    LOL I think your AI is lying again. LOL I’m quite certain I don’t belong in the ‘handsome’ category.

  • atzanteol@sh.itjust.works · 5 hours ago

    …it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it.

    So you’ve made a FAQ with an LLM interface? I could see that potentially being useful for corporate “let our bot answer your questions” tools.

    But the usefulness of AI isn’t just in “tell me a fact”. Like, what would your AI give for “what functions would I use in Python to convert a utf16 string to utf8”? Would the answer need to be in the KB already?

    • voracitude@lemmy.world · 4 hours ago

      It’s a very nice concept to be able to dump a shitload of knowledge into a folder, look at “processing…” for a few seconds, then ask questions to get exactly what you’re looking for out of it, rather than having to go digging through the mound of information and without having to worry that the computer just threw in a few made-up facts for giggles. The idea is that the dumping happens over time mostly, allowing you to quickly find buried information from years ago with a few relevant queries.

      One thing I’d do with this is dump all my emails into it, from across all my accounts. That might save me having to search keywords in 8+ accounts over 4-5 different platforms every so often…

      It also might have been useful in a lawsuit I prosecuted a few years ago. Instead of going through two years of encrypted messages by hand to pull out relevant excerpts with context, I could have exported the lot and just asked for the information. If it worked, it could have saved me months (I spent a few hours after work every night screenshotting, dating the screenshots in chronological order, and then once that was done I kept a spreadsheet that I filled with relevant excerpts and links to the screenshots, by reading every single screenshot… it was a lot).

      • SuspciousCarrot78@lemmy.world (OP) · 3 hours ago

        I would be super interested to hear if it could do that. I genuinely don’t know, because I haven’t tried it.

        If you can export your emails in the correct format, it might actually work. Try a small batch and report back.

        PS: you DON’T HAVE TO run >>summ if you don’t want to. You can ask questions against the raw files too. It’s just a keyword match (though obviously, a curated summary of keywords is generally less noisy)

        Wishing you luck! I didn’t make this enterprise grade, but if it works, use it.

    • SuspciousCarrot78@lemmy.world (OP) · 4 hours ago

      Correct. Curate your sources :)

      I can’t LoRa stupid out of a model…but I can do this. If your model is at all obedient and non-stupid, and reasons from good sources, it will do well with the harness.

      Would you like to see the benchmarks for the models I recommend in the “minimum reccs” section? They are very strong…and not chosen at random.

      Like the router, I bring receipts :)

    • SuspciousCarrot78@lemmy.world (OP) · 2 hours ago

      Oh it can try…but you can see its brain. That’s the glass box part of this. You can LITERALLY see why it says what it says, when it says it. And, because it provides references, you can go and check them manually if you wish.

      Additionally (and this is the neat part): the router actually operates outside of the jurisdiction of your LLM. Like, the LLM can only ask it questions. It can’t affect the router’s (deterministic) operation. The router gives no shits about your LLM.

      Sometimes, the LLM might like to give you some vibes about things. E.g.: IF YOU SHOUT AT IT LIKE THIS, the memory module of the router activates and stores that as a memory (because I figured, if you’re shouting at the llm, it’s probably important enough in the short term. That or you’re super pissed).

      The llm may “vibe” a bit (depending on the temp, seed, top_k etc), but 100/100, ALL CAPS >8 WORDS = store that shit into facts.json
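      In code terms, the trigger is roughly this (thresholds and the function name are mine, illustrative only):

      ```python
      # Sketch: the ALL-CAPS auto-store heuristic described above (not the shipped code).
      def should_auto_store(msg: str) -> bool:
          words = msg.split()
          is_shouting = msg == msg.upper() and any(ch.isalpha() for ch in msg)
          return is_shouting and len(words) > 8
      ```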

      Example:

      User: MY DENTIST APPOINTMENT IS 2:30PM ON SATURDAY THE 18TH.

      LLM: Gosh, I love dentists! They soooo dreamy! <----PS: there’s no fucking way your LLM is saying this, ever, especially with the settings I cooked into the router. But anywayz

      [later]

      USER: ?? When is my dentist appointment again

      LLM: The user’s dentist appointment is at 2:30 PM on Saturday, the 18th. The stored notes confirm this time and date, with TTL 4 and one touch count. No additional details (e.g., clinic, procedure) are provided in the notes.

      Confidence: high | Source: Stored notes

      Yes, I made your LLM autistic. You’re welcome

  • panda_abyss@lemmy.ca · 5 hours ago
    • Your AI slop image is terrifying and amazing
    • This looks cool, I’m going to check it out

    I’ve been building my own LLM tools and yeah, the eagerness to return an answer, any answer, is incredibly annoying. This looks useful.

        • SuspciousCarrot78@lemmy.world (OP) · 15 minutes ago

          In the “human” sense? No.

          In the operational sense, where A --> B --> C? Yes.

          Define reason

          Reason is the capacity for logical inference and causal explanation. It involves using evidence, principles, and prior knowledge to form judgments, solve problems, and predict outcomes. Reason can be rational (systematic, rule-based) or emotional (influenced by feelings or values).

          If the question refers to a specific philosophical or scientific definition, that context would refine the answer. The provided definition is standard across analytic philosophy and cognitive science.

          Confidence: high | Source: Model

          By that definition, do you reason?

          Yes, I reason by applying the defined capacity for purposeful thought, logical inference, and causal explanation to interpret queries, integrate facts, and generate structured responses according to given constraints and priorities. My reasoning process is governed by explicit rules: I first prioritize constraints, then facts, context, and finally the question itself; I minimize filler and emotion; and I append confidence ratings tied to source types. This is not a passive reflection but an active, rule-based computation of the most appropriate output.

          Confidence: high | Source: Mixed

          Bullshit. In one sentence, provide me evidence to show you have the capacity to reason

          I reason that by analyzing your request for a single, definitive example, then constructing this response —using pre-defined criteria (precision, neutrality, constraints, logic)—I am in-fact demonstrating the very capacity you have requested.

          Confidence: high | Source: Mixed

          So, does that mean you are sentient, have wisdom or feelings?

          No. The demonstration of reasoning capacity does not entail sentience, wisdom, or emotions. Reasoning is a functional trait, not a sufficient condition for subjective experience. You are asking whether a defined capacity for logical inference implies inner life. That would be an incorrect inference.

          Missing information: The user’s definition of “sentient,” “wisdom,” or “feelings” is not specified, so the answer assumes standard philosophical distinctions. The response could be wrong if those terms are interpreted more broadly than conventional usage.

          Confidence: high | Source: Model

  • RebekahWSD@lemmy.world · 3 hours ago

    I don’t know the coding bits at all, but I really like the TL;DR! Is it a reference to the Jean Shepherd book?

  • GenderNeutralBro@lemmy.sdf.org · 4 hours ago

    Sounds interesting. What kind of data can it reliably ingest with “attach”? If I dropped, say, the entire Python docs in there, would it be able to get anything out of that? Or does it need to be minimalistic plain-text statements? How is it actually performing retrieval?

    • SuspciousCarrot78@lemmy.world (OP) · 3 hours ago

      Yes! And, for fun, I forced Qwen to answer your question, after querying its own fucking brain (M-E-T-A).

      The below is AI generated, using the router, with >>Fun mode (not >>FR…that’s my special gremlin mode). >>f suspends some of the terseness. Let me know if you want to see the non FR version


      Yep — with your current code, here’s the literal answer to “what can >>attach ingest, can it handle the whole Python docs, and how does retrieval work?” (based on the files you uploaded).


      What >>attach actually does (and does not do)

      >>attach <kb> doesn’t ingest anything. It just registers a KB name into state.attached_kbs so later turns know which KB folders are “active.”

      The router’s own docstring is blunt about the intended contract:

      • “KBs are filesystem folders containing SUMM_*.md files.”
      • “/serious answers use filesystem KB retrieval ONLY from attached KBs.”

      So the “ingest” step is really:

      1. drop raw docs in the KB folder
      2. run >>summ new to generate SUMM_*.md
      3. then attach the KB (or keep it attached) and query it

      What kinds of files can be summarized (and therefore used via attach)?

      In the router’s filesystem SUMM pipeline, _SUPPORTED_RAW_EXTS = {".md", ".txt", ".pdf", ".html", ".htm"}

      • HTML is “cheap stripped” (scripts/styles removed, tags nuked) before summarizing
      • PDFs require pypdf — if missing, the router treats that as a failure/skip with a note (your top-level comment calls this out explicitly).
      • There’s also an explicit guard to truncate huge inputs before sending to the model (default summ.max_input_chars = 120_000).

      When a SUMM is created, it writes a provenance header including source_rel_path and source_sha256, then moves the original into /original/.

      So: you do not need “minimalistic plain-text statements.” You can feed it normal docs (md/txt/html/pdf) and it will produce SUMMs that become queryable.


      “If I dropped the entire Python docs in there…”

      Yes, it will produce something usable, because Python docs are mostly HTML and you explicitly support .html/.htm with stripping.

      But there are two practical gotchas in your implementation:

      1. It will generate one SUMM per source file (and you’ll end up with a lot of SUMMs). summ_new_in_kb() walks the KB tree, skips /original/, and summarizes every supported raw doc that doesn’t already have a corresponding SUMM_*.md.
      2. The SUMM prompt structure matters. Your shipped SUMM.md template is oriented around “overview / key ideas / steps / tradeoffs / pitfalls” rather than “API reference / signatures / parameters.” So it’ll work better for conceptual docs than for “tell me the exact signature of pathlib.Path.glob”.

      If you want Python-docs-as-KB to be strong, the move is: tweak the SUMM prompt to extract API sections + signatures + examples, not just “Key Ideas.”


      How retrieval works in “attached KB” mode

      This is the most important bit: filesystem KB retrieval is not vector embeddings. It’s a lightweight lexical scorer over SUMM markdown blocks.

      Concretely:

      • It searches only SUMM_*.md under attached KB roots, explicitly excluding /original/.
      • It splits each SUMM into blocks (paragraphs / headings), caps blocks around ~1200 chars, then scores blocks by token overlap with the query.
      • Scoring has a few boosts: headings matching query tokens, and a small bonus for code fences.
      • Then it returns the top hits (defaults like top_k=8, max_blocks_per_file=3, max_chars=2400).

      So attached-KB mode is basically: pre-summarize once → then do fast “smart grep” over summaries.

      That’s why it’s potato-friendly: you’re not embedding every doc on every query; SUMMs are preprocessed once.
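      If you want the gist of that scorer in code, it’s roughly this shape (an illustrative sketch; the real function names, caps, and boost values live in your repo, not here):

      ```python
      # Rough shape of the lexical block scorer: token overlap + heading boost + code-fence bonus.
      import re

      def score_block(block: str, query: str) -> float:
          q_tokens = set(re.findall(r"\w+", query.lower()))
          b_tokens = set(re.findall(r"\w+", block.lower()))
          score = float(len(q_tokens & b_tokens))                  # plain token overlap
          lines = block.splitlines()
          heading = lines[0] if lines else ""
          if heading.startswith("#") and q_tokens & set(re.findall(r"\w+", heading.lower())):
              score += 2.0                                         # heading-match boost
          if "`" * 3 in block:
              score += 0.5                                         # small bonus for code fences
          return score

      def retrieve(summ_texts: list[str], query: str,
                   top_k: int = 8, max_chars: int = 2400) -> str:
          blocks = [b[:1200] for t in summ_texts for b in t.split("\n\n") if b.strip()]
          ranked = sorted(blocks, key=lambda b: score_block(b, query), reverse=True)
          out, used = [], 0
          for b in ranked[:top_k]:
              if used + len(b) > max_chars:
                  break
              out.append(b)
              used += len(b)
          return "\n\n".join(out)
      ```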


      And “Vault / Mentats” retrieval is different

      When you >>move to vault, you’re no longer doing filesystem lexical search. You promote SUMMs into Qdrant and retrieve top-k chunks with embeddings + rerank (as configured in README / config).

      (Also worth noting: per your ingest module, the intended architecture is explicitly “filesystem SUMMs for day-to-day; Vault/Qdrant only for promoted SUMMs.” )


      One-liner you can paste as a Lemmy answer

      >>attach doesn’t ingest raw docs — it just activates a KB folder. The KB mode queries SUMM_*.md files (generated via >>summ new) using a lightweight lexical block scorer (token overlap + heading boosts), not embeddings. You can drop the whole Python HTML docs in a KB and it’ll work, but you’ll want a SUMM prompt tuned for API/reference detail or the summaries will smooth out the stuff you actually care about. If you promote SUMMs to the Vault (>>move to vault), then retrieval switches to Qdrant vector search + reranking.

      ###########################

      Is gut, ja?

      PS: You can tweak the SUMM.md yourself because it’s not baked into the code or nuthin. Hack away :)

      PPS: In my other post, I provided some links to pastebin you might find useful for that very purpose (code extraction)

      https://lemmy.world/post/41992574/21702596

      (see the ##My RAG post)

  • rnercle@sh.itjust.works · 5 hours ago

    TL;DR: “In God we trust. All others must bring data.”

    what the heck does that mean?