Mistral AI in Amazon Bedrock

Break new ground with powerful foundation models from Mistral AI

Benefits

Mistral AI models are transparent and customizable, appealing to enterprises that have compliance and regulatory requirements. The openly released models are available under the Apache 2.0 license as white-box solutions, with both weights and source code published.
Mistral AI models offer impressive inference speed and are optimized for low latency. They also have low memory requirements and high throughput for their respective sizes (7B, 8x7B).
Mistral AI models strike a remarkable balance between cost-effectiveness and performance. The use of sparse mixture of experts (MoE) makes Mistral AI’s LLMs efficient, affordable, and scalable while controlling compute costs.
Drive insights for your business by quickly and easily fine-tuning models with your custom data to address specific tasks and achieve compelling performance.

Meet Mistral AI

Mistral AI is on a mission to push AI forward. Its cutting-edge models reflect the company's ambition to become the leading supporter of the generative AI community and to elevate publicly available models to state-of-the-art performance.

Use cases

Mistral AI models extract the essence from lengthy articles so you quickly grasp key ideas and core messaging.
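
As an illustration, a summarization call to a Mistral model through the Bedrock Converse API might look like the sketch below. The model ID and region are assumptions to verify against the models enabled in your own Bedrock console:

```python
# Hypothetical model ID; check the Amazon Bedrock console for the IDs
# available in your account and region.
MODEL_ID = "mistral.mistral-large-2407-v1:0"

def build_summary_request(article: str, max_tokens: int = 256) -> dict:
    """Build a Converse API request asking for a three-sentence summary."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"text": "Summarize the key ideas of this article "
                             f"in three sentences:\n\n{article}"}
                ],
            }
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

def summarize(article: str) -> str:
    # boto3 is imported here so the request builder above has no dependencies.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(**build_summary_request(article))
    return response["output"]["message"]["content"][0]["text"]
```

Because the request builder is a pure function, prompts and inference parameters can be tested without making a network call.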

Mistral AI models deeply understand the underlying structure and architecture of text, organize information within text, and help focus attention on key concepts and relationships.

The core AI capabilities of understanding language, reasoning, and learning allow Mistral AI models to handle question answering with more human-like performance. The accuracy, explanation abilities, and versatility of Mistral AI models make them very useful for automating and scaling knowledge sharing.

Mistral AI models have an exceptional understanding of natural language and code-related tasks, which is essential for projects that need to juggle computer code and regular language. Mistral AI models can help generate code snippets, suggest bug fixes, and optimize existing code, speeding up your development process.
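
A code-generation call can also go through the lower-level InvokeModel API with Mistral's native request format, as in this sketch. The [INST] prompt wrapping follows Mistral's instruct convention, and the model ID is an assumption to confirm in your Bedrock console:

```python
import json

def format_instruct_prompt(instruction: str) -> str:
    # Mistral instruct models expect [INST] ... [/INST] wrapping when
    # called with the native request format rather than the Converse API.
    return f"<s>[INST] {instruction} [/INST]"

def build_code_request(instruction: str, max_tokens: int = 512) -> str:
    """Serialize a native Mistral request body for InvokeModel."""
    return json.dumps({
        "prompt": format_instruct_prompt(instruction),
        "max_tokens": max_tokens,
        "temperature": 0.1,  # low temperature for deterministic code output
    })

def generate_code(instruction: str) -> str:
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="mistral.mistral-7b-instruct-v0:2",  # assumed ID; verify
        body=build_code_request(instruction),
    )
    # Native Mistral responses return generations under "outputs".
    return json.loads(response["body"].read())["outputs"][0]["text"]
```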

Model versions

Mistral Large 2 (24.07)

The latest version of Mistral AI's flagship large language model, with significant improvements on multilingual accuracy, conversational behavior, coding capabilities, reasoning and instruction-following behavior.

Max tokens: 128K

Languages: Dozens of languages supported, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic and Hindi

Fine-tuning supported: No

Supported use cases: multilingual translation, text summarization, complex multilingual reasoning tasks, math and coding tasks including code generation

Mistral Large (24.02)

Mistral Large is a cutting-edge text generation model with top-tier reasoning capabilities. Its precise instruction-following abilities enable application development and tech stack modernization at scale.

Max tokens: 32K

Languages: Natively fluent in English, French, Spanish, German, and Italian

Fine-tuning supported: No

Supported use cases: precise instruction following, text summarization, translation, complex multilingual reasoning tasks, math and coding tasks including code generation

Mistral Small (24.02)

Mistral Small is a highly efficient large language model optimized for high-volume, low-latency language-based tasks. It provides outstanding performance at a cost-effective price point. Key features of Mistral Small include RAG specialization, coding proficiency, and multilingual capabilities.

Max tokens: 32K

Languages: English, French, German, Spanish, Italian

Fine-tuning supported: No

Supported use cases: Optimized for straightforward tasks that can be performed in bulk, such as classification, customer support, or text generation
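
For the bulk tasks Mistral Small targets, a simple classification loop might look like the following sketch. The label set, model ID, and region are illustrative assumptions:

```python
# Hypothetical label set for routing customer-support messages.
LABELS = ["billing", "shipping", "returns", "other"]

def build_classification_prompt(ticket: str) -> str:
    # Constrain the model to a single label so bulk output is easy to parse.
    return (
        "Classify the customer message into exactly one of these labels: "
        + ", ".join(LABELS)
        + ". Reply with the label only.\n\nMessage: "
        + ticket
    )

def classify_batch(tickets: list[str]) -> list[str]:
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    results = []
    for ticket in tickets:
        response = client.converse(
            modelId="mistral.mistral-small-2402-v1:0",  # assumed ID; verify
            messages=[{
                "role": "user",
                "content": [{"text": build_classification_prompt(ticket)}],
            }],
            # A tiny token budget keeps high-volume runs fast and cheap.
            inferenceConfig={"maxTokens": 8, "temperature": 0.0},
        )
        results.append(
            response["output"]["message"]["content"][0]["text"].strip()
        )
    return results
```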

Mixtral 8x7B

A sparse Mixture-of-Experts model built from 7B experts, with stronger capabilities than Mistral AI 7B. Uses 12B active parameters out of 45B total.

Max tokens: 32K

Languages: English, French, German, Spanish, Italian

Fine-tuning supported: No

Supported use cases: Text summarization, text structuring, question answering, and code completion

Mistral 7B

A 7B dense Transformer, quick to deploy and easily customizable. Small, yet powerful for a variety of use cases.

Max tokens: 32K

Languages: English

Fine-tuning supported: No

Supported use cases: Text summarization, text structuring, question answering, and code completion

Customers

  • Zalando

    Zalando is building the leading pan-European ecosystem for fashion and lifestyle e-commerce. They offer an inspiring and quality multi-brand shopping experience for fashion and lifestyle products to about 50 million active customers in 25 markets.

    At Zalando, accessing Mistral AI models in Amazon Bedrock has been a game-changer for our multinational operations across Europe. Mistral Large provides exceptional support and native fluency for European languages that empowers our diverse workforce to communicate seamlessly, fostering collaboration and inclusivity by using the models to draft emails in German, French, and other languages. Personally, the model’s German language capabilities have greatly aided my own language learning journey. Mistral's multilingual accuracy and nuanced understanding of grammar and cultural context paired with Amazon Bedrock's secure and serverless single API allows us to deliver exceptional service to our customers across the region.

    Samay Kapadia, ML Engineering Manager, Zalando
  • BigDataCorp

    AWS has enabled us to realize our vision and bring a robust product to market in less than three weeks, at a much lower cost than any comparable generative AI solutions. Amazon Bedrock allows us to focus on developing the business solutions our clients need, instead of having to worry about the challenge of downloading and running complex AI models at scale ourselves. Through our use of Amazon Bedrock with Mistral 7B, we have been able to achieve a higher ROI in a shorter period of time.

    Thoran Rodrigues, CEO & Founder, BigDataCorp
  • Superinsight

    Superinsight is a fast-growing legal tech startup in the U.S. that leverages generative AI to assist attorneys with disability claims, including Social Security, veterans' disability, personal injury, and workers' compensation. For decades, the overwhelming demand has far exceeded the number of available attorneys. Superinsight dramatically boosts productivity, allowing lawyers to handle more cases and ensuring that individuals with disabilities receive the benefits they deserve.

    At Superinsight, we process thousands of medical records for individuals in need. By leveraging the Mistral 7B model in Amazon Bedrock, we can efficiently extract data and uncover insights 10x faster, 25x cheaper, and identify 3x more insights that humans often miss, which is crucial in our business since any overlooked evidence can impact the outcome of each case. This powerful combination allows us to deliver superior medical reviews to our attorneys, enabling them to assist more people with disabilities.

    Nelson Chu, CEO, Superinsight