That’s amazing! I happen to use both Waterfox and Tree Style Tabs already, like a dream come true.
Interesting that it only works on Windows; a lot of AI projects I’ve seen have been the other way around.
I love seeing these truly open projects, rather than the typical “hey here are some weights + weird license”.
Great to see you back!
Banning either is fascist, although I assume/hope you were joking.
A while ago I found a model called Carl 33B. I can’t remember all the specifics, but I think it was a therapist AI designed specifically for stress, and I think it was based on LLaMA 1.
If you have a high-end GPU or lots of RAM, you can run some good-quality LLMs offline. I recommend watching Matthew Berman for tutorials (there are some covering paid hosting as well).
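If you want a quick taste before diving into tutorials, here’s a minimal sketch using the llama-cpp-python bindings (this assumes you’ve pip-installed llama-cpp-python and downloaded a GGUF model file; the model path and layer count below are just placeholders):

```python
# Minimal offline inference with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded GGUF model;
# the model path below is a placeholder -- use whatever model you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # context window size
    n_gpu_layers=35,  # offload layers to the GPU if you have one; 0 = CPU-only
)

result = llm(
    "Q: Name three uses for a local LLM.\nA:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(result["choices"][0]["text"].strip())
```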
glhf for the rest of the year! Thanks for all your work this year and see you in the next.
Here are some ideas:
Comparisons/reviews of different AI models. (Example: Llama 2 is better than LLaMA 1 because X)
Tutorials on how to apply AI. (Example: making a song-shuffling system with an LLM and music metadata; rough sketch below)
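To make that second idea a bit more concrete, here’s a rough hypothetical sketch of the song-shuffling example using llama-cpp-python. The track list, prompt, and model path are all made up purely for illustration:

```python
# Hypothetical sketch: describe track metadata in a prompt and ask a local
# LLM for a playlist ordering. All names and paths are placeholders.
import json
from llama_cpp import Llama

tracks = [  # made-up metadata for illustration
    {"id": 1, "title": "Sunrise", "genre": "ambient", "bpm": 70},
    {"id": 2, "title": "Night Drive", "genre": "synthwave", "bpm": 110},
    {"id": 3, "title": "Thunder", "genre": "rock", "bpm": 140},
]

prompt = (
    "Order these songs so the energy builds gradually. "
    "Reply with only a JSON list of ids.\n"
    f"{json.dumps(tracks)}\nOrder:"
)

llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf")  # placeholder
reply = llm(prompt, max_tokens=32, temperature=0)["choices"][0]["text"]

order = json.loads(reply)  # e.g. [1, 2, 3]; real code should validate this
playlist = [t for i in order for t in tracks if t["id"] == i]
print([t["title"] for t in playlist])
```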
I’ve seen you doing a lot of stuff on here, so I also wanted to say: Don’t overwork yourself, and I appreciate the effort you already put in.
Really unfortunate that there are people who would protest open source. I need to read some wholesome cat stories or something to restore my faith in humanity.
Knowledge level: Enthusiastic spectator. I don’t make or finetune LLMs, but I do watch AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.
Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding-related task?
Context: I’m able to run either model locally, but I can’t run the larger 70B model. So I was wondering if running the 34B Code Llama would be better, since it is larger. I’ve heard that models with better coding abilities are better for other types of tasks too, and that they are better with logic (I don’t know if this is true, I just heard it somewhere).
Are the Llama 2 models Apache 2.0-compatible? I think they use a custom license with some restrictions; I could be totally wrong, though.
Anything based on Llama 2, tbh. It’s fast enough and logical enough to handle the kinds of programming-related tasks I want to use an LLM for (writing boilerplate code, generating placeholder data, simple refactoring). With the release of the Vicuna and Code Llama models, things are getting even better.
It’s a cool thing you’ve made, but where’s the joke?
Even if you don’t normally look at linked sites, you gotta read this one.
I like a lot of the stuff I already see on here, so I would probably pick #1. Also, regarding #4: I really wouldn’t want to see stories or other AI-generated content here. I’d rather see things like “AI is being used to create stories in a new way” or “This game utilizes AI” than the actual stories or games themselves.
Not an answer to your question, but have you checked out Bedrock Linux as opposed to installing multiple distros? Or maybe using virtual machines?
I can’t see anything about inline completions, which (at least to me) are the main point of Copilot. Better integration of local LLMs into VS Code will be nice, though.
Take a look at this extension for WizardCoder.
Ten downvotes?