Hey all! I want to start experimenting with neuro-symbolic AI vs. LLMs and want to know how to get into this. As I understand it, Claude Code does this, but are there ways to use it locally?
How does it work under the hood? I know LLMs involve tokens, embeddings, weights, and transformers. How does the symbolic part change things?
Thanks!


I usually start with the Wikipedia article when I'm interested in something new. It'll have plenty of references at the bottom for reading more about a concept.
Interestingly enough, there's zero mention of Claude in there. And when I google it, I find a lot of very convoluted blog posts, and I can't tell whether they're above my head or just hallucinated stories. They go on for like 20 pages but don't really explain anything with all those words, or what anyone actually found in Claude's code.
Symbolic AI in itself isn't too hard. That's stuff from the 1980s, covered in every computer science textbook. I just have no clue how something like an expert system is supposed to be connected to a chatbot or programming agent.
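To make the textbook pattern concrete, here's a minimal sketch of how the two halves are usually glued together: a neural model turns free text into structured facts, and a 1980s-style forward-chaining rule engine does the logical reasoning on top. This is not how Claude Code works internally (nobody outside seems to know that); `extract_facts()` is a hypothetical stand-in for an LLM call and is just hard-coded here so the example runs.

```python
def extract_facts(text):
    # Hypothetical "neural" step: in a real setup this would prompt an
    # LLM and parse its output into (predicate, argument) tuples.
    # Hard-coded stub so the sketch is self-contained.
    known = {"Socrates is a man": ("man", "socrates")}
    return {known[s] for s in text.split(". ") if s in known}

RULES = [
    # IF man(X) THEN mortal(X) -- classic expert-system style rule
    (("man",), "mortal"),
]

def forward_chain(facts):
    """Apply rules until no new facts are derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            for pred, arg in list(facts):
                if pred in premises and (conclusion, arg) not in facts:
                    facts.add((conclusion, arg))
                    changed = True
    return facts

facts = extract_facts("Socrates is a man")
print(forward_chain(facts))  # includes ('mortal', 'socrates')
```

The symbolic half is the easy, well-understood part; the open research question is making the neural-to-symbolic extraction reliable.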
So there are no open source neuro-symbolic models that you are aware of?
https://lmddgtfy.net/?q=open+source+neuro-symbolic+ai
First search result
Thanks! Have you ever used this? I’m also seeing another logic language called Scallop.
I’ve recently started running models locally with llama.cpp, but this seems like a whole other setup.
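For what it's worth, a local llama.cpp setup can slot into this kind of pipeline without much extra machinery. A rough sketch, assuming you have `llama-server` running (it exposes an OpenAI-compatible API on localhost); `ask_llm()` shows the wiring but isn't executed here, and the "symbolic" check is deliberately trivial set membership where a real system would use a logic engine like Scallop or a Datalog/Prolog solver:

```python
import json
import urllib.request

def ask_llm(prompt, url="http://localhost:8080/v1/chat/completions"):
    # Queries a locally running llama-server; shown for wiring only,
    # not called in this sketch.
    req = urllib.request.Request(
        url,
        data=json.dumps(
            {"messages": [{"role": "user", "content": prompt}]}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

def symbolic_check(claim, facts):
    # The symbolic side: only accept claims entailed by the knowledge
    # base. Entailment here is plain membership; a real system would
    # run actual inference.
    return claim in facts

# The neuro-symbolic loop: the model proposes, the logic side verifies.
facts = {("capital", "france", "paris")}
proposal = ("capital", "france", "paris")  # pretend ask_llm() produced this
print(symbolic_check(proposal, facts))  # True
```

The appeal of the pattern is that the verifier is deterministic, so model hallucinations get filtered instead of propagated.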
There are a lot of links on the GitHub page to the project's docs, setup, demo, discussions, etc.