Hey all! I want to start testing neuro-symbolic AI vs. LLMs and want to know how to get into this. As I understand it, Claude Code does this, but are there ways to use it locally?

How does it work under the hood? I know LLMs involve tokens, embeddings, weights, and transformers. How does the symbolic part change that?
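For what it's worth, here's my rough mental model of the split, as a toy sketch in plain Python (made-up facts, no actual LLM or solver involved, just to illustrate the idea): the neural side extracts facts from text, and the symbolic side applies hard logical rules to them deterministically.

```python
# Toy neuro-symbolic sketch: the "neural" side would extract facts from
# text (hard-coded here in place of an LLM), and the "symbolic" side
# applies a logical rule repeatedly until no new facts can be derived
# (forward chaining to a fixed point).

def forward_chain(facts, rule):
    """Apply `rule` to the fact set until it stops producing new facts."""
    facts = set(facts)
    while True:
        new = rule(facts) - facts
        if not new:
            return facts
        facts |= new

# Facts an LLM might have extracted: parent(x, y) pairs (hypothetical data).
parents = {("alice", "bob"), ("bob", "carol")}

# Symbolic rule: ancestor(a, c) if ancestor(a, b) and ancestor(b, c),
# i.e. the transitive closure of the parent relation.
def ancestor_rule(facts):
    return {(a, c) for (a, b) in facts for (b2, c) in facts if b == b2}

ancestors = forward_chain(parents, ancestor_rule)
# Unlike a sampled LLM answer, the derivation is exact and reproducible:
# ("alice", "carol") is guaranteed to appear in the result.
```

The point being: the symbolic layer trades the LLM's probabilistic token prediction for exact, rule-governed inference over discrete facts.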

Thanks!

  • venusaur@lemmy.world (OP) · 3 days ago

    Thanks! Have you ever used this? I’m also seeing another logic language called Scallop.

    I’ve recently started running models locally with llama.cpp, but this seems like a whole other setup.

    • HelloRoot@lemy.lol · 3 days ago

      There are a lot of links on the GitHub page to the project's docs, setup, demo, discussions, etc.