Accelerating Developer Adoption: How Mistral AI Scaled to Millions of SDK Downloads with Speakeasy

Mistral is on a mission to provide tailor-made AI to the world's builders. And Speakeasy is helping them do it with type-safe SDKs that make it easy to integrate AI into your application.

“We are very happy with Speakeasy’s support leading up to the launch of our v1 client. Internally, our developers find the SDK useful, it’s actively used, and continues to generate valuable feedback. The Speakeasy team has been instrumental throughout our implementation journey.”

Gaspard Blanchet, Mistral AI

Overview: Offering Always-in-sync SDKs With Minimal Eng Work

Mistral AI, a pioneering French foundational AI model company, provides open-source models and AI solutions through a comprehensive API offering.

Their platform supports text generation with streaming capabilities, chat completions, embeddings generation, and specialized services like OCR and moderation (see the Mistral AI API Docs).

In the fast-paced generative AI landscape, giving millions of developers immediate access to the latest models and features, through consistent, reliable SDKs in the languages they use most, is a key competitive differentiator.

This case study explains how Mistral AI automated their SDK generation process using Speakeasy to maintain consistent, high-quality client libraries across multiple deployment environments, freeing their team to focus on core AI innovation.

Technical Context

Before automating their SDK generation, Mistral AI’s API presented several implementation challenges for SDK development:

  • Complex API structure: Their completion and chat APIs featured nested JSON with conditional fields and streaming responses, pushing the limits of standard OpenAPI representations.
  • Multiple authentication schemes: Services run on Mistral's own infrastructure as well as on GCP and Azure, each with different authentication requirements and subtle API differences.
  • Rapid feature evolution: New capabilities, like structured outputs, needed to be consistently and quickly available across all client libraries.

Challenges

Developer Experience Challenges

Before implementing Speakeasy, Mistral AI relied on manually written clients. This manual process struggled to keep pace with rapid API development, leading to several problems for developers using the SDKs:

  • Feature gap: SDKs often lagged behind the API capabilities, with developers waiting for new features or having to work around missing functionality.
  • Inconsistent implementations: Features might appear (or behave differently) in one language SDK before others.
  • Documentation drift: Keeping API documentation, SDK documentation, and SDK implementations synchronized during rapid development cycles was a constant struggle.

Technical Implementation Challenges

The engineering team faced significant technical hurdles maintaining these manual SDKs:

  • Representing Complex APIs: Accurately modeling the complex nested JSON structures, especially for streaming responses, within OpenAPI specifications was difficult. Example structure of a chat completion request:
request.json
{
  "model": "mistral-large-latest",
  "messages": [
    {
      "role": "user",
      "content": "What is the best French painter?"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 250,
  "stream": true
}
  • Multi-Environment Support: Managing the distinct authentication logic and potential subtle API differences across GCP, Azure, and on-premise environments within each SDK was cumbersome (see the sketch after this list).

  • SDK Consistency: Ensuring feature parity, consistent behavior, and idiomatic usage across both Python and TypeScript implementations required significant manual effort and testing.
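
To make the multi-environment burden concrete, here is a minimal, purely illustrative sketch (not Mistral AI's actual code; all names and header details are hypothetical) of the per-environment branching a hand-written client has to carry, and which must then be mirrored in every other language:

manual_env_handling.py
# Illustrative only (hypothetical names, not Mistral AI's actual code): the kind of
# per-environment branching a hand-written client must carry. Every method then has to
# thread this through its HTTP calls, and the TypeScript client must mirror it all.
from typing import Dict, Tuple


def resolve_environment(env: str, api_key: str, endpoint: str = "") -> Tuple[str, Dict[str, str]]:
    """Return (base_url, auth_headers) for a deployment target (hypothetical helper)."""
    if env == "mistral":
        # Mistral-hosted API ("la Plateforme")
        return "https://api.mistral.ai", {"Authorization": f"Bearer {api_key}"}
    if env == "azure":
        # Azure-hosted deployment: per-resource endpoint, provider-specific credential header
        return endpoint, {"api-key": api_key}  # hypothetical header name, for illustration
    if env == "gcp":
        # GCP-hosted deployment: tokens are typically minted from GCP credentials instead
        return endpoint, {"Authorization": f"Bearer {api_key}"}
    # Self-hosted / on-premise deployment
    return endpoint, {"Authorization": f"Bearer {api_key}"}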

Solution: Automated SDK Generation with Speakeasy

Mistral AI adopted Speakeasy’s SDK generation platform to automate the process and address these challenges comprehensively.

Multi-Source Specification Management

To handle their different deployment targets and authentication schemes, the Mistral AI team designed a sophisticated workflow leveraging Speakeasy’s ability to manage OpenAPI specifications.

They used multiple specification sources and applied overlays and transformations to tailor the final specification for each target environment (e.g., adding cloud-specific authentication details or Azure-specific modifications).

This approach allowed them to maintain a single source of truth for their core API logic while automatically generating tailored specifications and SDKs for their complex deployment scenarios, replacing tedious manual SDK coding with an automated pipeline.
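
As a rough illustration of the idea (a conceptual sketch only; Speakeasy's actual overlays are declarative OpenAPI Overlay documents applied at generation time, and the field names below are simplified), the effect is equivalent to patching the shared specification per target before generating each SDK variant:

overlay_concept.py
# Conceptual sketch: tailoring a shared OpenAPI spec per deployment target.
# This mimics what a declarative overlay achieves; it is not Speakeasy's actual mechanism.
import copy

base_spec = {
    "openapi": "3.1.0",
    "info": {"title": "Mistral AI API", "version": "1.0.0"},
    "components": {
        "securitySchemes": {"ApiKey": {"type": "http", "scheme": "bearer"}}
    },
    "paths": {"/v1/chat/completions": {"post": {"operationId": "chatCompletion"}}},
}


def apply_azure_overlay(spec: dict) -> dict:
    """Hypothetical transformation adding an Azure-specific auth scheme to the shared spec."""
    tailored = copy.deepcopy(spec)
    tailored["components"]["securitySchemes"]["AzureApiKey"] = {
        "type": "apiKey",
        "in": "header",
        "name": "api-key",  # illustrative scheme only; real details differ per deployment
    }
    return tailored


azure_spec = apply_azure_overlay(base_spec)  # the Azure SDK variant is generated from this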

Cross-Platform Support

Speakeasy enabled Mistral AI to automatically generate and maintain consistent SDKs across their diverse deployment environments, ensuring developers have a reliable experience regardless of how they access the Mistral AI platform:

Environment / Feature               | Python SDK | TypeScript SDK | Internal SDK Variants
Cloud Platforms (e.g. Azure, GCP)   | ✓          | ✓              | ✓
Self-deployment                     | ✓          | ✓              | ✓
Consistent API Feature Coverage     | ✓          | ✓              | ✓+

(Links: Mistral’s Python SDK, Mistral’s TypeScript SDK)

This automation ensures that whether developers interact with Mistral AI via a managed cloud instance or a self-deployed environment, they benefit from SDKs generated from the same verified OpenAPI source, including necessary configurations (like specific authentication methods) handled during the generation process. The platform provided automated generation for both public-facing SDKs and enhanced internal variants with additional capabilities.
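
For example, a self-deployed endpoint can typically be targeted by overriding the server URL when constructing the generated client (a minimal sketch, assuming the generated client exposes a server_url override, as Speakeasy-generated SDKs generally do; the endpoint URL below is hypothetical):

self_hosted_example.py
# Sketch: pointing the generated Python client at a self-deployed endpoint.
# Assumes a server_url override is exposed by the generated client; the URL is hypothetical.
import os

from mistralai import Mistral

client = Mistral(
    api_key=os.environ.get("MISTRAL_API_KEY"),
    server_url="https://mistral.internal.example.com",  # hypothetical self-hosted deployment
)

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello from a self-hosted deployment"}],
)
print(response.choices[0].message.content)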

From Manual to Automated: Collaborative Engineering

The transition from manual SDK creation to an automated workflow involved close collaboration between Mistral AI and Speakeasy.

“It was a learning curve for our organization to move from an artisanal process to a more fully automated one. But we are happy where we are now because we have a better understanding of what we need to do in the spec to get what we want after the generation.”

Gaspard Blanchet, Mistral AI

This partnership allowed Mistral AI to leverage Speakeasy’s expertise and customization capabilities to accurately model their complex API and authentication requirements.

Before Speakeasy: Based on their earlier client versions, developers had to manually construct request bodies, handle optional parameters explicitly, implement distinct logic for streaming versus non-streaming responses, and manage HTTP requests and error handling directly. This led to more verbose and potentially error-prone code requiring significant maintenance.

manual_client.py
# Excerpt from the 'chat' method in the previous manual client
# Demonstrates manual handling of parameters, request body, and streaming logic
def chat(
    self,
    model: str,
    messages: List[ChatMessage],
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
    top_p: Optional[float] = None,
    random_seed: Optional[int] = None,
    stream: Optional[bool] = False,  # Note: stream parameter handled manually
    safe_mode: Optional[bool] = False,
    # ... other parameters
) -> Union[ChatCompletionResponse, Iterator[ChatCompletionStreamResponse]]:
    # Manually construct the request data dictionary
    data = {
        "model": model,
        "messages": [message.model_dump() for message in messages],
        "safe_mode": safe_mode,
    }
    # Manually add optional parameters if they are provided
    if temperature is not None:
        data["temperature"] = temperature
    if max_tokens is not None:
        data["max_tokens"] = max_tokens
    if top_p is not None:
        data["top_p"] = top_p
    if random_seed is not None:
        data["random_seed"] = random_seed
    # ... manual checks for other optional parameters

    # Manual branching logic based on the 'stream' parameter
    if stream:
        data["stream"] = True
        # Manually call internal request method configured for streaming
        response = self._request(
            "post", self._resolve_url("/chat/completions"), json=data, stream=True
        )
        # Manually process the streaming response via a separate generator
        return self._process_chat_stream_response(response)
    else:
        # Manually call internal request method for non-streaming
        response = self._request("post", self._resolve_url("/chat/completions"), json=data)
        # Manually parse the JSON and instantiate the response object
        return ChatCompletionResponse.model_validate_json(response.content)

# Note: The internal '_request' method (not shown) would contain further manual logic
# for handling HTTP calls, authentication headers, and error status codes.

This manual approach required developers to carefully manage numerous optional fields, different response types depending on parameters like stream, and the underlying HTTP interactions for each API endpoint.

After Speakeasy: The generated code provides clean, idiomatic interfaces with automatic type handling, validation, and proper resource management (such as context managers in Python), and it abstracts away the underlying HTTP complexity.

sdk_client.py
# Generated v1 client: typed request/response models, validation, and HTTP handling are built in
import os

from mistralai import Mistral

api_key = os.environ.get("MISTRAL_API_KEY")
model = "mistral-large-latest"

client = Mistral(api_key=api_key)

chat_response = client.chat.complete(
    model=model,
    messages=[
        {"role": "user", "content": "What is the best French cheese?"}
    ],
    # Optional parameters like temperature and max_tokens are typed and validated
)
print(chat_response.choices[0].message.content)

# Streaming is handled cleanly by a separate generated method (see the sketch below)

This automated approach enabled Mistral AI to provide a polished, consistent experience for developers, significantly reducing boilerplate and potential integration errors.
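
Streaming follows the same pattern through a dedicated generated method. A brief sketch, based on the published v1 Python SDK (exact attribute names may differ slightly between releases):

streaming_example.py
# Sketch: streaming through the generated SDK's dedicated stream method,
# replacing the manual branching shown in the earlier client excerpt.
# Based on the public v1 Python SDK; attribute names may vary across versions.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ.get("MISTRAL_API_KEY"))

stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What is the best French painter?"}],
)
with stream as events:
    for event in events:
        # Each event wraps a typed completion chunk
        delta = event.data.choices[0].delta.content
        if delta:
            print(delta, end="")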

Key Results

Mistral AI’s implementation of Speakeasy has yielded impressive technical and business outcomes:

Engineering Efficiency

  • SDKs automatically update when API changes occur.
  • Reduced maintenance overhead, freeing up core engineers to focus on AI model development and platform features.
  • Significant productivity boost for internal SDK consumers, such as the front-end team.

Feature Velocity & Quality

  • Rapid feature rollout: New API capabilities, like structured outputs, were implemented consistently across SDKs in days, compared to a multi-week timeline previously (see the sketch after this list).
  • Complete API coverage, ensuring all public endpoints and features are consistently available across supported SDKs.
  • Improved internal practices: Increased SDK usage by internal teams, with Speakeasy’s spec validation helping enforce OpenAPI quality and bringing consistent type-checking and input validation across their ecosystem.
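
As an example of how quickly such a capability surfaces in the generated clients, structured (JSON-mode) output is exposed as an ordinary typed parameter. A sketch using the public chat completion API's response_format option (consult the SDK reference for the exact typed model):

structured_output_example.py
# Sketch: requesting JSON-mode structured output through the generated SDK.
# Uses the public response_format option; check the SDK reference for exact typed models.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ.get("MISTRAL_API_KEY"))

res = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {
            "role": "user",
            "content": "Return a JSON object listing three French painters with their birth years.",
        }
    ],
    response_format={"type": "json_object"},  # ask the model for valid JSON output
)
print(res.choices[0].message.content)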

Implementation Journey

Mistral AI’s journey to fully automated SDK generation followed these key phases:

  1. Specification Refinement: Collaborating with Speakeasy to ensure their OpenAPI specifications accurately represented the complex API structure, including streaming and authentication details.
  2. Customization & Transformation: Developing necessary transformations (using Speakeasy’s customization features) to handle environment-specific logic like authentication.
  3. Validation & Testing: Rigorous testing of the generated SDKs across different languages and deployment environments.

What’s Next

Mistral AI continues to leverage and expand its Speakeasy implementation:

  • Automated Test Generation: Implementing Speakeasy’s test generation features for comprehensive SDK testing.
  • CI/CD Integration: Integrating Speakeasy’s SDK generation into their existing CI/CD pipeline for fully automated builds and releases upon API updates.
  • Generated Code Snippets: Adding Speakeasy-generated code examples directly into their API documentation to further improve developer onboarding.
  • New Model Support: Upcoming models and services, like their advanced OCR capabilities, will utilize Speakeasy-generated SDKs from day one, demonstrating continued confidence in the platform.

As Mistral AI expands its offerings with models like Mistral Large, Pixtral, and specialized services, Speakeasy provides the scalable foundation for maintaining a world-class developer experience across their entire API ecosystem.

Explore the Speakeasy-generated SDKs and the Mistral AI API documentation to learn more.
