Hey all! I want to start testing neuro-symbolic AI vs. LLMs and want to know how to get into this. As I understand it, Claude Code does this, but are there ways to use it locally?
How does it work under the hood? I know LLMs involve tokens, embeddings, weights, and transformers. How does the symbolic part change that?
Thanks!


Haven’t heard that term before, but one of the things that’s been obvious to me to experiment with – not that I’ve actually gotten around to it yet – is to have an LLM try using Prolog, Z3, and/or SQL as tools to overcome some of its weak points. That’s naively what I’d expect “neuro-symbolic AI” techniques to be (assuming you’re looking at current topics rather than stuff from ~20 years ago), but again, shot in the dark here.
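To make the "symbolic tool" idea concrete, here's a toy sketch using SQLite (stdlib, so it runs as-is). The model proposes a query, an exact symbolic engine executes it, and the result goes back into the model's context. The `call_llm` function is just a stand-in I made up for illustration; in practice it'd be a real model API generating the SQL.

```python
import sqlite3

def call_llm(prompt: str) -> str:
    # Placeholder: a real LLM would generate this SQL from the question.
    return "SELECT name FROM employees WHERE salary > 90000 ORDER BY name"

def answer_with_sql_tool(question: str, db: sqlite3.Connection) -> list:
    sql = call_llm(f"Write SQLite SQL to answer: {question}")
    # The symbolic part: exact execution, no hallucinated arithmetic.
    return [row[0] for row in db.execute(sql)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
db.executemany("INSERT INTO employees VALUES (?, ?)",
               [("Ada", 120000), ("Bob", 80000), ("Cy", 95000)])

print(answer_with_sql_tool("Who earns more than 90k?", db))
# ['Ada', 'Cy']
```

The same loop works with Prolog or Z3 as the executor; SQL just happens to be the easiest one to demo without installing anything.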
Supposed to be combining neural networks (LLMs) with symbolic AI, so I guess instead of just analyzing tokens it's also analyzing symbols and rules.
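A toy illustration of what the symbolic half looks like: facts are discrete symbols, rules are explicit if-then implications, and inference is exact forward chaining, with no weights or tokens involved. In a neuro-symbolic setup the LLM might extract the facts from messy text, while the reasoning below stays deterministic and auditable. (This is my own sketch of the general idea, not any particular system's implementation.)

```python
def forward_chain(facts: set, rules: list) -> set:
    """Apply rules (premises -> conclusion) until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]
print(forward_chain(facts, rules))
```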