Mistral Large

A cheaper GPT-4 rival with a 32K-token context window

2024-02-27

Mistral Large offers top-tier reasoning, is multilingual by design, supports native function calling, and provides a 32K-token context window. The pre-trained model achieves 81.2% accuracy on MMLU.
Mistral Large is a cutting-edge language model designed to rival GPT-4, offering top-tier reasoning, multilingual fluency, and a 32K-token context window for precise information recall from large documents. With native function calling and a constrained (JSON) output mode, it handles complex tasks such as text understanding, code generation, and application development. Achieving 81.2% accuracy on MMLU, it ranks as the world's second-place model available through an API, accessible via Mistral's la Plateforme and Microsoft Azure. Its multilingual capabilities, spanning English, French, Spanish, German, and Italian, outperform competitors such as LLaMA 2 70B. Additionally, Mistral Small, a latency-optimized model, complements Mistral Large for cost-sensitive workloads. Together, they give developers versatile options, with JSON formatting and function calling easing integration into existing applications.
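As a rough sketch of what the function-calling and JSON-output features look like in practice, the snippet below assembles a chat-completions request body in the shape Mistral's API documents (the endpoint path, the `mistral-large-latest` model name, and the `response_format`/`tools` fields follow Mistral's published API format; the `get_weather` tool is a hypothetical example, not part of the API):

```python
import json

# Chat-completions endpoint on Mistral's la Plateforme.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt: str) -> dict:
    """Assemble a request body for Mistral Large that enables
    native function calling and JSON-constrained output."""
    return {
        "model": "mistral-large-latest",
        "messages": [{"role": "user", "content": prompt}],
        # Constrained output mode: the model must reply with valid JSON.
        "response_format": {"type": "json_object"},
        # Native function calling: declare a tool the model may invoke.
        # `get_weather` is a hypothetical example function.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_chat_request("What is the weather in Paris? Answer in JSON.")
print(json.dumps(payload, indent=2))
```

In a real client, this payload would be POSTed to `API_URL` with an `Authorization: Bearer <api-key>` header; the model then either answers directly in JSON or returns a tool call naming `get_weather` with its arguments for the application to execute.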
Open Source Developer Tools Artificial Intelligence GitHub