Key architectural details

Mixture of Experts (MoE): 128 experts, with 4 active per token, enabling efficient scaling and specialization (a routing sketch follows this list).

119B total parameters, with 6B active parameters per token (8B including embedding and output layers).

256k context window, supporting long-form interactions and document analysis.

Configurable reasoning effort: Toggle between fast, low-latency responses and deep, reasoning-intensive outputs.

Native multimodality: Accepts both text and image inputs, unlocking use cases from document parsing to visual analysis.
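As a rough illustration of how only 4 of the 128 experts contribute to each token, here is a minimal top-k routing sketch. The layer widths, activation function, and router design are placeholder assumptions for the example, not the model's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    # Minimal sketch: route each token to its top-4 experts out of 128.
    # d_model, d_ff, and the expert MLP shape are illustrative assumptions.
    def __init__(self, d_model=512, d_ff=1024, num_experts=128, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                          # (tokens, 128)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only the 4 best experts
        weights = F.softmax(weights, dim=-1)             # normalize over those 4
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():
                mask = idx[:, k] == e                    # tokens whose k-th choice is expert e
                out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[e](x[mask])
        return out

# Only the selected experts run for a given token, which is why the active
# parameter count stays far below the total parameter count.
tokens = torch.randn(8, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([8, 512])
```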

  • TheFrirish@tarte.nuage-libre.fr

    I have a 7900 XTX and a Ryzen 9 7950X3D with 96 GB of RAM, which I humbly believe is already way above 95% of people’s setups.

    I don’t think I can run this, not with Ollama, that’s for sure.
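For context on why that hardware is tight for a model this size, here is a back-of-envelope estimate of weight memory alone. The quantization widths are assumptions, and KV cache and runtime overhead are ignored.

```python
# Back-of-envelope weight memory for a 119B-parameter model at common
# quantization widths (illustrative only; excludes KV cache and overhead).
total_params = 119e9
for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gib = total_params * bits / 8 / 2**30
    print(f"{label:>5}: ~{gib:.0f} GiB of weights")
# fp16: ~222 GiB, 8-bit: ~111 GiB, 4-bit: ~55 GiB. Even the 4-bit figure
# exceeds the 24 GB of VRAM on a 7900 XTX, though it could fit in 96 GB
# of system RAM at CPU/offload speeds.
```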