Commit 2611c82

github-actions[bot], speakeasybot, and amenasria authored
chore: 🐝 Update SDK - Generate MISTRALAI MISTRALAI-SDK 1.4.0 (#183)
* ci: regenerated with OpenAPI Doc, Speakeasy CLI 1.469.11
* Add chat_prediction example
* Git ignore .vscode/

Co-authored-by: speakeasybot <[email protected]>
Co-authored-by: Alexandre Menasria <[email protected]>
1 parent d482b3c commit 2611c82


56 files changed (+927 -478 lines)

.gitignore (+1)

@@ -1,3 +1,4 @@
+.vscode/
 .speakeasy/reports
 README-PYPI.md
 .venv/

.speakeasy/gen.lock (+102 -86)

Large diffs are not rendered by default.

.speakeasy/gen.yaml (+2 -1)

@@ -13,7 +13,7 @@ generation:
   oAuth2ClientCredentialsEnabled: true
   oAuth2PasswordEnabled: false
 python:
-  version: 1.3.1
+  version: 1.4.0
   additionalDependencies:
     dev:
       pytest: ^8.2.2
@@ -23,6 +23,7 @@ python:
   clientServerStatusCodesAsErrors: true
   defaultErrorName: SDKError
   description: Python Client SDK for the Mistral AI API.
+  enableCustomCodeRegions: false
   enumFormat: union
   envVarPrefix: MISTRAL
   fixFlags:

.speakeasy/workflow.lock (+8 -8)

@@ -1,4 +1,4 @@
-speakeasyVersion: 1.462.2
+speakeasyVersion: 1.469.11
 sources:
   mistral-azure-source:
     sourceNamespace: mistral-azure-source
@@ -14,11 +14,11 @@ sources:
       - latest
   mistral-openapi:
     sourceNamespace: mistral-openapi
-    sourceRevisionDigest: sha256:84bbc6f6011a31e21c8a674b01104446f986c7b5a6b002357800be8ef939b8da
-    sourceBlobDigest: sha256:ebc7c1bb20aa87873a255cebea1e451099d8949ea1bbff81ec5fd45a107e3a32
+    sourceRevisionDigest: sha256:c414dd5eecca5f02fe9012a1d131f696e0257fe100c371609272dbc6c522ef07
+    sourceBlobDigest: sha256:f48af039106d00de84345fd095fbf4831f18fbeeef07e9ff7bba70a0e07eda07
     tags:
       - latest
-      - speakeasy-sdk-regen-1736937863
+      - speakeasy-sdk-regen-1737393201
 targets:
   mistralai-azure-sdk:
     source: mistral-azure-source
@@ -37,13 +37,13 @@ targets:
   mistralai-sdk:
     source: mistral-openapi
     sourceNamespace: mistral-openapi
-    sourceRevisionDigest: sha256:84bbc6f6011a31e21c8a674b01104446f986c7b5a6b002357800be8ef939b8da
-    sourceBlobDigest: sha256:ebc7c1bb20aa87873a255cebea1e451099d8949ea1bbff81ec5fd45a107e3a32
+    sourceRevisionDigest: sha256:c414dd5eecca5f02fe9012a1d131f696e0257fe100c371609272dbc6c522ef07
+    sourceBlobDigest: sha256:f48af039106d00de84345fd095fbf4831f18fbeeef07e9ff7bba70a0e07eda07
     codeSamplesNamespace: mistral-openapi-code-samples
-    codeSamplesRevisionDigest: sha256:7461afcdcac02dc78b61b234ee4c5e25abbaca9ad6cf5aab415e7c97b5638b49
+    codeSamplesRevisionDigest: sha256:3f61d33c46733b24ecd422423900425b381529da038992e59bdb5a9b766bdf89
 workflow:
   workflowVersion: 1.0.0
-  speakeasyVersion: 1.462.2
+  speakeasyVersion: latest
   sources:
     mistral-azure-source:
       inputs:

.speakeasy/workflow.yaml (-3)

@@ -1,7 +1,4 @@
 workflowVersion: 1.0.0
-# speakeasyVersion is pinned to unblock https://github.com/mistralai/client-python/pull/173
-# The speakeasy run was appending `_` to some attributes to avoid conflicts with reserved keywords.
-# This would have change the SDK APIs and break the existing clients which we don't want.
 speakeasyVersion: latest
 sources:
   mistral-azure-source:

README.md (+7 -39)

@@ -92,9 +92,7 @@ with Mistral(
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ])
-
-    assert res is not None
+    ], stream=False)
 
     # Handle response
     print(res)
@@ -119,9 +117,7 @@ async def main():
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ])
-
-    assert res is not None
+    ], stream=False)
 
     # Handle response
     print(res)
@@ -147,8 +143,6 @@ with Mistral(
         "content": open("example.file", "rb"),
     })
 
-    assert res is not None
-
     # Handle response
     print(res)
 ```
@@ -172,8 +166,6 @@ async def main():
         "content": open("example.file", "rb"),
     })
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -198,9 +190,7 @@ with Mistral(
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ], agent_id="<value>")
-
-    assert res is not None
+    ], agent_id="<id>", stream=False)
 
     # Handle response
     print(res)
@@ -225,9 +215,7 @@ async def main():
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ], agent_id="<value>")
-
-    assert res is not None
+    ], agent_id="<id>", stream=False)
 
     # Handle response
     print(res)
@@ -251,9 +239,7 @@ with Mistral(
     res = mistral.embeddings.create(inputs=[
         "Embed this sentence.",
         "As well as this one.",
-    ], model="Wrangler")
-
-    assert res is not None
+    ], model="mistral-embed")
 
     # Handle response
     print(res)
@@ -276,9 +262,7 @@ async def main():
     res = await mistral.embeddings.create_async(inputs=[
         "Embed this sentence.",
         "As well as this one.",
-    ], model="Wrangler")
-
-    assert res is not None
+    ], model="mistral-embed")
 
     # Handle response
     print(res)
@@ -480,9 +464,7 @@ with Mistral(
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ])
-
-    assert res is not None
+    ], stream=True)
 
     with res as event_stream:
         for event in event_stream:
@@ -519,8 +501,6 @@ with Mistral(
         "content": open("example.file", "rb"),
     })
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -545,8 +525,6 @@ with Mistral(
     res = mistral.models.list(,
         RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -565,8 +543,6 @@ with Mistral(
 
     res = mistral.models.list()
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -608,8 +584,6 @@ with Mistral(
 
     res = mistral.models.list()
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -646,8 +620,6 @@ with Mistral(
 
     res = mistral.models.list()
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -667,8 +639,6 @@ with Mistral(
 
     res = mistral.models.list()
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -778,8 +748,6 @@ with Mistral(
 
     res = mistral.models.list()
 
-    assert res is not None
-
     # Handle response
     print(res)
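The README hunks above drop the trailing `assert res is not None` checks and make streaming explicit: the non-streaming examples now pass `stream=False`, the streaming example passes `stream=True`, and the embeddings examples use `model="mistral-embed"` instead of the placeholder `"Wrangler"`. A minimal sketch of what the regenerated non-streaming chat example plausibly looks like end to end, assuming the surrounding `Mistral(...)` setup and the `model="mistral-small-latest"` choice, which are not shown in these hunks:

```python
import os

from mistralai import Mistral

# Sketch of the regenerated non-streaming chat example; the client setup and
# model name are assumed, only the messages/stream arguments come from the diff.
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.chat.complete(messages=[
        {
            "content": "Who is the best French painter? Answer in one short sentence.",
            "role": "user",
        },
    ], model="mistral-small-latest", stream=False)

    # Handle response
    print(res)
```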

RELEASES.md (+11 -1)

@@ -138,4 +138,14 @@ Based on:
 ### Generated
 - [python v1.3.1] .
 ### Releases
-- [PyPI v1.3.1] https://pypi.org/project/mistralai/1.3.1 - .
+- [PyPI v1.3.1] https://pypi.org/project/mistralai/1.3.1 - .
+
+## 2025-01-21 11:09:53
+### Changes
+Based on:
+- OpenAPI Doc
+- Speakeasy CLI 1.469.11 (2.493.32) https://github.com/speakeasy-api/speakeasy
+### Generated
+- [python v1.4.0] .
+### Releases
+- [PyPI v1.4.0] https://pypi.org/project/mistralai/1.4.0 - .

USAGE.md (+6 -22)

@@ -17,9 +17,7 @@ with Mistral(
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ])
-
-    assert res is not None
+    ], stream=False)
 
     # Handle response
     print(res)
@@ -44,9 +42,7 @@ async def main():
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ])
-
-    assert res is not None
+    ], stream=False)
 
     # Handle response
     print(res)
@@ -72,8 +68,6 @@ with Mistral(
         "content": open("example.file", "rb"),
     })
 
-    assert res is not None
-
     # Handle response
     print(res)
 ```
@@ -97,8 +91,6 @@ async def main():
         "content": open("example.file", "rb"),
    })
 
-    assert res is not None
-
     # Handle response
     print(res)
 
@@ -123,9 +115,7 @@ with Mistral(
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ], agent_id="<value>")
-
-    assert res is not None
+    ], agent_id="<id>", stream=False)
 
     # Handle response
     print(res)
@@ -150,9 +140,7 @@ async def main():
             "content": "Who is the best French painter? Answer in one short sentence.",
             "role": "user",
         },
-    ], agent_id="<value>")
-
-    assert res is not None
+    ], agent_id="<id>", stream=False)
 
     # Handle response
     print(res)
@@ -176,9 +164,7 @@ with Mistral(
     res = mistral.embeddings.create(inputs=[
         "Embed this sentence.",
         "As well as this one.",
-    ], model="Wrangler")
-
-    assert res is not None
+    ], model="mistral-embed")
 
     # Handle response
     print(res)
@@ -201,9 +187,7 @@ async def main():
     res = await mistral.embeddings.create_async(inputs=[
         "Embed this sentence.",
         "As well as this one.",
-    ], model="Wrangler")
-
-    assert res is not None
+    ], model="mistral-embed")
 
     # Handle response
     print(res)
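USAGE.md mirrors the README changes, including the switch from the placeholder `model="Wrangler"` to `model="mistral-embed"`. A short sketch of the resulting embeddings call, again assuming the client setup that sits around these hunks:

```python
import os

from mistralai import Mistral

# Sketch of the updated embeddings example; the inputs and model arguments
# come from the diff, the API-key handling mirrors the other generated examples.
with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.embeddings.create(inputs=[
        "Embed this sentence.",
        "As well as this one.",
    ], model="mistral-embed")

    # Handle response
    print(res)
```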

docs/models/agentscompletionrequest.md (+2 -1)

@@ -16,4 +16,5 @@
 | `tool_choice` | [Optional[models.AgentsCompletionRequestToolChoice]](../models/agentscompletionrequesttoolchoice.md) | :heavy_minus_sign: | N/A | |
 | `presence_penalty` | *Optional[float]* | :heavy_minus_sign: | presence_penalty determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative. | |
 | `frequency_penalty` | *Optional[float]* | :heavy_minus_sign: | frequency_penalty penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition. | |
-| `n` | *OptionalNullable[int]* | :heavy_minus_sign: | Number of completions to return for each request, input tokens are only billed once. | |
+| `n` | *OptionalNullable[int]* | :heavy_minus_sign: | Number of completions to return for each request, input tokens are only billed once. | |
+| `prediction` | [Optional[models.Prediction]](../models/prediction.md) | :heavy_minus_sign: | N/A | |
