The approach hardwires the model weights into transistors and uses an older 6 nm process. They're targeting 70B-parameter models (presumably at 16-bit precision) by year end. It should cost much less than a 140 GB card, but I don't know the details.
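(The 140 GB figure follows from the numbers in the announcement; a quick back-of-the-envelope check, assuming 2 bytes per parameter:)

```python
# Sanity check: a 70B-parameter model at 16-bit precision
# (2 bytes per parameter) needs roughly 140 GB just for the weights.
params = 70e9          # 70 billion parameters
bytes_per_param = 2    # 16-bit precision
weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.0f} GB")  # -> 140 GB
```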
Oh cool, a whole new e-waste industry. Anyone want this old GPT-4.1 chip? I know the latest is GPT-8 and the whole ecosystem has largely moved on in a way that renders most software incompatible, but hey, it's right here on this PCIe card, so you can't stick it in a Raspberry Pi either!
No? Guess I’ll chuck it in the landfill!


