Quick post about a change I made that’s worked out well.
I was using the OpenAI API for automations in n8n (email summaries, content drafts, that kind of thing) and was spending ~$40/month.
Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
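For anyone curious what the swap looks like, here's a minimal sketch. Ollama exposes an OpenAI-compatible Chat Completions endpoint at `/v1/chat/completions`, so in most cases only the base URL and model name change; the model names below (`llama3:8b`, and `gpt-4o-mini` as a stand-in for whatever OP used before) are illustrative:

```python
# Sketch of the endpoint swap: same request shape, different URL and model.
# "gpt-4o-mini" is just a placeholder for the hosted model being replaced.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, local: bool = True):
    """Return (url, payload) for a chat-completion call."""
    return (
        OLLAMA_URL if local else OPENAI_URL,
        {
            "model": "llama3:8b" if local else "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        },
    )

url, payload = build_request("Summarize this email: ...")
print(url)  # http://localhost:11434/v1/chat/completions
```

In n8n this is just editing the HTTP Request node's URL and JSON body; no auth header is needed for a local Ollama instance.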
For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.
Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.
Free bullshit generator
What’s the model name to pull?
Any quality difference?
It depends on what OP was using before, but going from something like GPT-5.2 to Llama 3 8B will be a massive difference (although OP says they only use it for basic tasks, so that offsets it somewhat).
Llama 3 already being a very old model doesn't help either.
I run Qwen3.5-35B-A3B-AWQ-4bit, which, while leagues ahead of Llama 3 8B, is still noticeably behind the hosted models.
This is not to say open source is bad; if one had the resources to run something like Qwen3.5-397B-A17B, it would also be up there.
I only ever use my local AI for the Home Assistant voice assistant on my phone, but it's more of a gimmick/party trick since I only have temperature sensors currently (I only got into HA recently), and it can't access Wi-Fi, so it just sits quietly unloaded on my TrueNAS server.