LLMs for coding have improved dramatically over the past year or so. But I find that quality varies greatly depending on the model. Models like Gemini and GPT tend to be overconfident and don't communicate well enough, while Claude knows when to stop and evaluate the options. I've had mixed results with local models, but I'm still adjusting quantization settings to make the best use of my VRAM.
You still need the skills to understand programming and engineering design, and frankly you need the personality to be meticulous with your reviews, but it's really nice having something that can code 3-8x faster than I was coding before.
PrimeTime recently had a good video about a senior programmer's experience tracking down a very hard-to-find bug.