

A local LLM not using llama.cpp as the backend? Daring today, aren't we.
Wonder what its performance is in comparison.



Looks pretty neat. Open source as well. Will give it a try, thanks!


Wild stuff! Thanks for sharing


Holy moly, this looks amazing. Can't wait to play around with this.


That's a really weird play on their part… But other than that, the movie looks like it could be good.


When this comes to video, virtual productions are going to be wild.


That was my first thought when I heard about Turbo XL. Have gotten nowhere near that on my 3080. Pushed it to 1 sec.
Wonder what speed they would get with ControlNet.


New txt2vid model? Been toying with it a little and you get some okay stuff. If the ability to prompt motion comes, that would be the bomb.
BTW, how did you upload a video to Lemmy?


If this really can do real-time synthesis, then it opens up a whole new world of possibility. Thought we'd have to wait years for this.


Just had a quick look at the LumaAI Discord. Two minutes in and I'd already made something I quite like. So cool. Thanks! :)
Would love to see a Pixelfed implementation.