Merged
4 changes: 4 additions & 0 deletions elasticsearch/_async/client/snapshot.py
@@ -802,6 +802,10 @@ async def repository_analyze(
 This allows you to demonstrate to your storage supplier that a repository analysis failure can only be caused by an incompatibility with AWS S3 and cannot be attributed to a problem in Elasticsearch.
 Please do not report Elasticsearch issues involving third-party storage systems unless you can demonstrate that the same issue exists when analysing a repository that uses the reference implementation of the same storage protocol.
 You will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects.</p>
+<p>The analysis may also report a failure if your repository experienced a service disruption while the analysis was running.
+In practice, occasional service disruptions are inevitable, but the analysis cannot itself distinguish such disruptions from incorrect behavior, and so it must report all deviations from the expected behavior as failures.
+If you are certain that you can ascribe an analysis failure to such a service disruption, wait for your service provider to resolve the disruption and then re-run the analysis.
+Elasticsearch will be unable to create or restore snapshots during repository service disruptions, so you must ensure that these events occur only very rarely.</p>
 <p>If the analysis is successful, the API returns details of the testing process, optionally including how long each operation took.
 You can use this information to determine the performance of your storage system.
 If any operation fails or returns an incorrect result, the API returns an error.
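The docstring added above describes when a repository analysis reports failure; a minimal sketch of driving the API from the async client might look like the following. The repository name, parameter values, and the stub-friendly `Any` typing are illustrative assumptions, not taken from this diff.

```python
import asyncio
from typing import Any


async def analyze_repository(client: Any, repository: str) -> dict:
    # Hypothetical sketch: run a repository analysis and return its body.
    # `client` is assumed to behave like an AsyncElasticsearch instance
    # (typed as Any so the sketch stays self-contained). `detailed=True`
    # asks for the per-operation timings the docstring mentions.
    resp = await client.snapshot.repository_analyze(
        name=repository,
        blob_count=100,
        max_blob_size="10mb",
        detailed=True,
    )
    return resp.body
```

Per the guidance above, a failure caused by a transient service disruption should be re-run manually once the disruption clears, rather than retried automatically.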
4 changes: 4 additions & 0 deletions elasticsearch/_sync/client/snapshot.py
@@ -802,6 +802,10 @@ def repository_analyze(
 This allows you to demonstrate to your storage supplier that a repository analysis failure can only be caused by an incompatibility with AWS S3 and cannot be attributed to a problem in Elasticsearch.
 Please do not report Elasticsearch issues involving third-party storage systems unless you can demonstrate that the same issue exists when analysing a repository that uses the reference implementation of the same storage protocol.
 You will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects.</p>
+<p>The analysis may also report a failure if your repository experienced a service disruption while the analysis was running.
+In practice, occasional service disruptions are inevitable, but the analysis cannot itself distinguish such disruptions from incorrect behavior, and so it must report all deviations from the expected behavior as failures.
+If you are certain that you can ascribe an analysis failure to such a service disruption, wait for your service provider to resolve the disruption and then re-run the analysis.
+Elasticsearch will be unable to create or restore snapshots during repository service disruptions, so you must ensure that these events occur only very rarely.</p>
 <p>If the analysis is successful, the API returns details of the testing process, optionally including how long each operation took.
 You can use this information to determine the performance of your storage system.
 If any operation fails or returns an incorrect result, the API returns an error.
2 changes: 1 addition & 1 deletion elasticsearch/_version.py
@@ -16,5 +16,5 @@
 # under the License.

 __versionstr__ = "9.4.0"
-__es_specification_commit__ = "9926f2cb48e44e4e3540fbe48303dbfef72d8bd7"
+__es_specification_commit__ = "fcf537e4be958d56e9c7cafe9076afdc8a91ffc1"
 _SERVERLESS_API_VERSION = "2023-10-31"
2 changes: 2 additions & 0 deletions elasticsearch/dsl/types.py
@@ -656,6 +656,7 @@ class FieldSort(AttrDict[Any]):
     "keyword",
     "text",
     "search_as_you_type",
+    "wildcard",
     "date",
     "date_nanos",
     "boolean",
@@ -721,6 +722,7 @@ def __init__(
     "keyword",
     "text",
     "search_as_you_type",
+    "wildcard",
     "date",
     "date_nanos",
     "boolean",
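The two hunks above add `"wildcard"` to the literal field types that `FieldSort`'s type hints accept. A hedged sketch of the raw sort clause this value is used in (the field name is hypothetical):

```python
def sort_clause(field: str = "file_path") -> dict:
    # Build a field sort that tells Elasticsearch to treat the field as
    # a "wildcard" type on indices where it is not mapped, rather than
    # failing the search. The field name is illustrative.
    return {
        field: {
            "order": "asc",
            "unmapped_type": "wildcard",
        }
    }
```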
2 changes: 1 addition & 1 deletion elasticsearch/serializer.py
@@ -54,7 +54,7 @@

     __all__.append("PyArrowSerializer")
 except ImportError:
-    pa = None
+    pa = None  # type: ignore[assignment]


 class JsonSerializer(_JsonSerializer):
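The `# type: ignore[assignment]` comment silences mypy, which otherwise rejects rebinding an imported module name to `None`. A self-contained sketch of the same optional-dependency pattern, using a deliberately missing module name so the fallback branch runs:

```python
try:
    # Hypothetical optional extra; stands in for pyarrow here.
    import a_package_that_is_not_installed as pa
except ImportError:
    # Fall back to a None sentinel. mypy flags this rebinding with an
    # [assignment] error, hence the ignore comment in the real code.
    pa = None  # type: ignore[assignment]


def pyarrow_available() -> bool:
    # Callers feature-gate on the sentinel before touching the library.
    return pa is not None
```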
@@ -1065,14 +1065,9 @@ async def test_metadata_mapping(
     "dims": 10,
     "index": True,
     "index_options": {
-        "bits": 4,
-        "cluster_size": 384,
-        "default_visit_percentage": 0.0,
-        "flat_index_threshold": -1,
-        "rescore_vector": {
-            "oversample": 3.0,
-        },
-        "type": "bbq_disk",
+        "ef_construction": 100,
+        "m": 16,
+        "type": "int8_hnsw",
     },
     "similarity": "cosine",
 }
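The updated expectation swaps the `bbq_disk` index options for `int8_hnsw` ones. A hedged sketch of a `dense_vector` field mapping matching the options the test now expects (the parameter choices mirror the test values; using them elsewhere is an assumption):

```python
def dense_vector_mapping(dims: int = 10) -> dict:
    # Build a dense_vector field mapping using the int8_hnsw index type.
    # "m" bounds each HNSW graph node's connections and "ef_construction"
    # sets the candidate-list size used while building the graph.
    return {
        "type": "dense_vector",
        "dims": dims,
        "index": True,
        "index_options": {
            "type": "int8_hnsw",
            "m": 16,
            "ef_construction": 100,
        },
        "similarity": "cosine",
    }
```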
@@ -1045,14 +1045,9 @@ def test_metadata_mapping(self, sync_client: Elasticsearch, index: str) -> None:
     "dims": 10,
     "index": True,
     "index_options": {
-        "bits": 4,
-        "cluster_size": 384,
-        "default_visit_percentage": 0.0,
-        "flat_index_threshold": -1,
-        "rescore_vector": {
-            "oversample": 3.0,
-        },
-        "type": "bbq_disk",
+        "ef_construction": 100,
+        "m": 16,
+        "type": "int8_hnsw",
     },
     "similarity": "cosine",
 }