Mistral just launched their new large open weights model, Mistral Large 3 (675B total parameters, 41B active), alongside a set of three Ministral models (3B, 8B, 14B). Mistral has released Instruct (non-reasoning) variants of all four models, as well as reasoning variants of the three Ministral models. All models support multimodal inputs and are available with an Apache 2.0 license today on @huggingface.

We evaluated Mistral Large 3 and the Instruct variants of the three Ministral models prior to launch. Mistral's highest scoring model in the Artificial Analysis Intelligence Index remains the proprietary Magistral Medium 1.2, launched back in September; this is because reasoning gives models a significant advantage in many of the evals we use. Mistral discloses that a reasoning version of Mistral Large 3 is already in training, and we look forward to evaluating it soon!

Key highlights:

➤ Large and small models: at 675B total parameters with 41B active, Mistral Large 3 is Mistral's first open weights mixture-of-experts model since Mixtral 8x7B and 8x22B in late 2023 to early 2024. The Ministral releases are dense models in 3B, 8B, and 14B parameter variants (see the back-of-the-envelope sketch below for what the total-vs-active split implies).

➤ Significant intelligence increase, but not among leading models (including proprietary): Mistral Large 3 represents a significant upgrade over the previous Mistral Large 2, with an 11-point increase in the Intelligence Index, up to 38. However, Large 3 still trails leading proprietary reasoning and non-reasoning models.

➤ Versatile small models: the Ministral models are released with Base, Instruct, and Reasoning variant weights. We tested only the Instruct variants ahead of release, which achieved Index scores of 31 (14B), 28 (8B), and 22 (3B). This places Ministral 14B ahead of the previous Mistral Small 3.2 with 40% fewer parameters. We are working on evaluating the reasoning variants and will share their intelligence results soon.

➤ Multimodal capabilities: all models in the release support text and image inputs. This is a significant differentiator for Mistral Large 3, as few open weights models in its size class support image input. Context length also increases to 256k tokens, enabling larger-input tasks.

These new models from Mistral are not a step change from the open weights competition, but they represent a strong performance base with vision capabilities. The Ministral 8B and 14B variants offer particularly compelling performance for their size, and we're excited to see how the community uses and builds on these models.

At launch, the new models are available for serverless inference on @MistralAI and a range of other providers including @awscloud Bedrock, @Azure AI Foundry, @IBMwatsonx, @FireworksAI_HQ, @togethercompute, and @modal.
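As a rough illustration of the mixture-of-experts tradeoff noted in the highlights above, the sketch below estimates memory footprint and per-token compute from the total and active parameter counts. The bytes-per-parameter and FLOPs-per-parameter figures are standard rules of thumb we are assuming here, not Mistral-published numbers.

```python
# Back-of-the-envelope estimates for a mixture-of-experts model.
# Assumptions (ours, not Mistral's): FP8 weights (1 byte/param) and
# ~2 forward-pass FLOPs per active parameter per token.

total_params = 675e9   # Mistral Large 3: total parameters
active_params = 41e9   # parameters active per token

bytes_per_param = 1.0  # FP8; use 2.0 for BF16
weight_memory_gb = total_params * bytes_per_param / 1e9
flops_per_token = 2 * active_params

print(f"Approx. weight memory: {weight_memory_gb:,.0f} GB")
print(f"Approx. forward FLOPs per token: {flops_per_token:.1e}")

# Total parameters drive the serving memory footprint; only the active
# parameters drive per-token compute, so Large 3 decodes roughly like a
# ~41B dense model while requiring the memory of a ~675B one.
```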
Mistral Large 3 trails the frontier, but is notably one of the most intelligent open weights multimodal non-reasoning models. Recent models from DeepSeek (V3.2) and Moonshot (Kimi K2) continue to support only text input and output.
Due to their small size, the Ministral releases show a solid intelligence-cost tradeoff, completing the Index evaluations at a substantially lower cost than comparable small models from the Qwen3 family, particularly the VL variants that, like Ministral, support image inputs.
The Ministral models are especially differentiated for tasks requiring image inputs and a non-reasoning model. All three sizes are a significant upgrade from Google’s Gemma 3 family (previously a go-to option for small multimodal models) and are competitive with Alibaba’s recent Qwen3 VL releases.
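For those who want to try the image-input path, here is a minimal sketch assuming an OpenAI-compatible chat completions endpoint; the base URL, API key, and model identifier are placeholders, so check your chosen provider's documentation for the actual values.

```python
from openai import OpenAI

# Placeholder base URL, key, and model slug -- substitute your provider's values.
client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_API_KEY")

# Send a mixed text + image request in the standard OpenAI-compatible format.
response = client.chat.completions.create(
    model="ministral-14b-instruct",  # hypothetical model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the chart in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```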
Magistral Medium 1.2 remains Mistral's overall leading model in the Artificial Analysis Intelligence Index.
For further analysis of these new models, and of providers for them as they emerge, see our model pages on Artificial Analysis:
Mistral Large 3:
Ministral 14B:
Ministral 8B: