Live SDKs
Maintaining 4 SDKs improves API integrations for internal and external developers
Mistral is on a mission to provide tailor-made AI to the world's builders. And Speakeasy is helping them do it with type-safe SDKs that make it easy to integrate AI into your application.
“We are very happy with Speakeasy’s support leading up to the launch of our v1 client. Internally, our developers find the SDK useful, it’s actively used, and continues to generate valuable feedback. The Speakeasy team has been instrumental throughout our implementation journey.”
Gaspard Blanchet, Mistral AI
Mistral AI, a pioneering French foundational AI model company, provides open-source models and AI solutions through a comprehensive API offering.
Their platform supports text generation with streaming capabilities, chat completions, embeddings generation, and specialized services like OCR and moderation (Mistral AI API Docs).
In the fast-paced generative AI landscape, providing millions of developers immediate access to the latest models and features via consistent, reliable SDKs in users’ most popular languages is a key competitive differentiator.
This case study explains how Mistral AI automated their SDK generation process using Speakeasy to maintain consistent, high-quality client libraries across multiple deployment environments, freeing their team to focus on core AI innovation.
Before implementing Speakeasy, Mistral AI relied on manually written clients. This manual process struggled to keep pace with rapid API development, leading to inconsistencies and delayed feature support for developers using the SDKs.
The engineering team faced significant technical hurdles maintaining these manual SDKs. Even a basic streaming chat completion request involves several interacting parameters that each hand-written client had to model correctly:

```json
{
  "model": "mistral-large-latest",
  "messages": [
    { "role": "user", "content": "What is the best French painter?" }
  ],
  "temperature": 0.7,
  "max_tokens": 250,
  "stream": true
}
```
- **Multi-Environment Support:** Managing the distinct authentication logic and potentially subtle API differences across GCP, Azure, and on-premise environments within each SDK was cumbersome.
- **SDK Consistency:** Ensuring feature parity, consistent behavior, and idiomatic usage across both the Python and TypeScript implementations required significant manual effort and testing.
Mistral AI adopted Speakeasy’s SDK generation platform to automate the process and address these challenges comprehensively.
To handle their different deployment targets and authentication schemes, the Mistral AI team designed a sophisticated workflow leveraging Speakeasy’s ability to manage OpenAPI specifications.
They used multiple specification sources and applied overlays and transformations to tailor the final specification for each target environment (e.g., adding cloud-specific authentication details or Azure-specific modifications).
This approach allowed them to maintain a single source of truth for their core API logic while automatically generating tailored specifications and SDKs for their complex deployment scenarios, replacing tedious manual SDK coding with an automated pipeline.
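An overlay-driven workflow like this can be sketched with the OpenAPI Overlay specification. The targets, scheme name, and server URL below are illustrative assumptions, not Mistral AI's actual overlay:

```yaml
overlay: 1.0.0
info:
  title: Illustrative environment overlay (hypothetical values)
  version: 0.0.1
actions:
  # Hypothetical: add an environment-specific security scheme
  - target: $.components.securitySchemes
    update:
      AzureApiKey:
        type: apiKey
        in: header
        name: Authorization
  # Hypothetical: append a deployment-specific server entry
  - target: $.servers
    update:
      url: https://example-azure-endpoint.invalid/v1
      description: Azure deployment (placeholder URL)
```

Applied on top of the base specification, an overlay like this yields a tailored document per environment without forking the source of truth.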
Speakeasy enabled Mistral AI to automatically generate and maintain consistent SDKs across their diverse deployment environments, ensuring developers have a reliable experience regardless of how they access the Mistral AI platform:
| Environment / Feature | Python SDK | TypeScript SDK | Internal SDK Variants |
| --- | --- | --- | --- |
| Cloud Platforms (e.g. Azure, GCP) | ✓ | ✓ | ✓ |
| Self-deployment | ✓ | ✓ | ✓ |
| Consistent API Feature Coverage | ✓ | ✓ | ✓+ |
(Links: Mistral’s Python SDK, Mistral’s TypeScript SDK)
This automation ensures that whether developers interact with Mistral AI via a managed cloud instance or a self-deployed environment, they benefit from SDKs generated from the same verified OpenAPI source, with necessary configurations (like environment-specific authentication methods) handled during generation. The platform generates both the public-facing SDKs and enhanced internal variants with additional capabilities.
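The multi-environment story can be illustrated with a small, self-contained sketch. The environment names, URLs, and helper function below are hypothetical; they only mirror the common pattern in which a generated client exposes default servers plus a caller-supplied override for self-hosted deployments:

```python
from typing import Optional

# Hypothetical default servers per named environment (placeholder URLs)
DEFAULT_SERVERS = {
    "cloud": "https://api.example-cloud.invalid/v1",
    "azure": "https://example-azure-endpoint.invalid/v1",
}


def resolve_server(environment: str = "cloud", server_url: Optional[str] = None) -> str:
    """Return the base URL a client should talk to.

    An explicit server_url (e.g. a self-deployed instance) always takes
    precedence, mirroring how generated SDKs let callers override the
    default server configuration.
    """
    if server_url is not None:
        return server_url.rstrip("/")
    try:
        return DEFAULT_SERVERS[environment]
    except KeyError:
        raise ValueError(f"unknown environment: {environment!r}")


# Named environment uses the baked-in default; self-deployment overrides it
print(resolve_server("azure"))
print(resolve_server(server_url="http://localhost:8000/"))
```

Centralizing this resolution in generated code is what removes the per-environment branching that each hand-written client previously duplicated.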
The transition from manual SDK creation to an automated workflow involved close collaboration between Mistral AI and Speakeasy.
“It was a learning curve for our organization to move from an artisanal process to a more fully automated one. But we are happy where we are now because we have a better understanding of what we need to do in the spec to get what we want after the generation.”
Gaspard Blanchet, Mistral AI
This partnership allowed Mistral AI to leverage Speakeasy’s expertise and customization capabilities to accurately model their complex API and authentication requirements.
Before Speakeasy: Based on their earlier client versions, developers had to manually construct request bodies, handle optional parameters explicitly, implement distinct logic for streaming versus non-streaming responses, and manage HTTP requests and error handling directly. This led to more verbose and potentially error-prone code requiring significant maintenance.
```python
# Excerpt from the 'chat' method in the previous manual client
# Demonstrates manual handling of parameters, request body, and streaming logic
def chat(
    self,
    model: str,
    messages: List[ChatMessage],
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
    top_p: Optional[float] = None,
    random_seed: Optional[int] = None,
    stream: Optional[bool] = False,  # Note: stream parameter handled manually
    safe_mode: Optional[bool] = False,
    # ... other parameters
) -> Union[ChatCompletionResponse, Iterator[ChatCompletionStreamResponse]]:
    # Manually construct the request data dictionary
    data = {
        "model": model,
        "messages": [message.model_dump() for message in messages],
        "safe_mode": safe_mode,
    }

    # Manually add optional parameters if they are provided
    if temperature is not None:
        data["temperature"] = temperature
    if max_tokens is not None:
        data["max_tokens"] = max_tokens
    if top_p is not None:
        data["top_p"] = top_p
    if random_seed is not None:
        data["random_seed"] = random_seed
    # ... manual checks for other optional parameters

    # Manual branching logic based on the 'stream' parameter
    if stream:
        data["stream"] = True
        # Manually call the internal request method configured for streaming
        response = self._request(
            "post", self._resolve_url("/chat/completions"), json=data, stream=True
        )
        # Manually process the streaming response via a separate generator
        return self._process_chat_stream_response(response)
    else:
        # Manually call the internal request method for non-streaming
        response = self._request("post", self._resolve_url("/chat/completions"), json=data)
        # Manually parse the JSON and instantiate the response object
        return ChatCompletionResponse.model_validate_json(response.content)

# Note: the internal '_request' method (not shown) contains further manual logic
# for handling HTTP calls, authentication headers, and error status codes.
```
This manual approach required developers to carefully manage numerous optional fields, different response types depending on parameters like `stream`, and the underlying HTTP interactions for each API endpoint.
After Speakeasy: The generated code provides clean, idiomatic interfaces with automatic type handling, validation, proper resource management (like context managers in Python), and abstracts away the underlying HTTP complexity.
```python
# Generated method with automatic type handling and context management
import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ.get("MISTRAL_API_KEY")
model = "mistral-large-latest"

client = MistralClient(api_key=api_key)

chat_response = client.chat(
    model=model,
    messages=[ChatMessage(role="user", content="What is the best French cheese?")],
    # Optional parameters like temperature and max_tokens are handled automatically
)
print(chat_response.choices[0].message.content)

# Streaming is handled cleanly by a separate generated method:
# stream_response = client.chat_stream(...)
# for chunk in stream_response:
#     ...
```
This automated approach enabled Mistral AI to provide a polished, consistent experience for developers, significantly reducing boilerplate and potential integration errors.
Mistral AI’s implementation of Speakeasy has yielded strong technical and business outcomes.
Mistral AI’s journey to fully automated SDK generation progressed through several key phases, from hand-written clients to a fully automated generation pipeline.
Mistral AI continues to leverage and expand its Speakeasy implementation.
As Mistral AI expands its offerings with models like Mistral Large, Pixtral, and specialized services, Speakeasy provides the scalable foundation for maintaining a world-class developer experience across their entire API ecosystem.
Explore the Speakeasy-generated SDKs and the Mistral AI API documentation: