Hey all! I want to start experimenting with neuro-symbolic AI vs. LLMs and want to know how to get into it. As I understand it, Claude Code does something like this, but are there ways to do it locally?
How does it work under the hood? I know LLMs involve tokens, embeddings, weights, and transformers. How does the symbolic part change things?
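To make the question concrete, here's a minimal sketch of the pattern I mean by neuro-symbolic: a neural side (an LLM) proposes structured facts, and a symbolic side applies explicit logical rules over them. The `extract_facts` stub is hypothetical and just stands in for a real model call:

```python
# Minimal neuro-symbolic sketch: a neural component proposes facts,
# a symbolic component derives new ones with explicit rules.

def extract_facts(text):
    # Stand-in for an LLM call that turns text into structured triples.
    # A real system would prompt a model and parse its output.
    return {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive(facts):
    # Symbolic rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z).
    derived = set(facts)
    changed = True
    while changed:  # forward-chain until no new facts appear
        changed = False
        new = {("grandparent", x, z)
               for (p1, x, y1) in derived if p1 == "parent"
               for (p2, y2, z) in derived if p2 == "parent" and y1 == y2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

facts = derive(extract_facts("Alice is Bob's parent; Bob is Carol's parent."))
print(("grandparent", "alice", "carol") in facts)  # True
```

The point is that the derivation step is deterministic and inspectable, unlike the model's weights.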
Thanks!


Thanks! Have you ever used this? I’m also seeing another logic language called Scallop.
I’ve recently started running models locally with llama.cpp, but this seems like a whole other setup.
There are a lot of links on the GitHub page to the project's docs, setup, demo, discussions, etc.
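From what I've read, the core idea behind a probabilistic logic language like Scallop can be sketched in a few lines: facts carry probabilities (e.g. confidences from a neural model), and rules propagate them. This is only an illustration of one simple semantics (max-product over derivations), not Scallop's actual implementation, and the relation names are made up:

```python
# Toy probabilistic rule application, Scallop-style in spirit:
# each fact has a probability; a rule's conclusion gets the product
# of its premises' probabilities (keeping the max over derivations).

facts = {
    ("edge", "a", "b"): 0.9,   # e.g. confidences from a neural model
    ("edge", "b", "c"): 0.8,
}

def apply_path_rule(db):
    # Rule: path(X, Z) :- edge(X, Y), edge(Y, Z).
    out = dict(db)
    for (r1, x, y1), p1 in db.items():
        for (r2, y2, z), p2 in db.items():
            if r1 == r2 == "edge" and y1 == y2:
                key, p = ("path", x, z), p1 * p2
                out[key] = max(out.get(key, 0.0), p)
    return out

db = apply_path_rule(facts)
print(round(db[("path", "a", "c")], 2))  # 0.72
```

Scallop's real machinery (provenance semirings, differentiable reasoning) is much richer, but this is the basic shape of "logic over uncertain facts" as I understand it.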