
Bump beam.version from 2.41.0 to 2.52.0 #56

Open · dependabot[bot] wants to merge 1 commit into base: master
Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub · Dec 11, 2023

Bumps beam.version from 2.41.0 to 2.52.0.
Updates org.apache.beam:beam-sdks-java-core from 2.41.0 to 2.52.0

Release notes

Sourced from org.apache.beam:beam-sdks-java-core's releases.

Beam 2.52.0 release

We are happy to present the new 2.52.0 release of Beam. This release includes both improvements and new functionality. See the download page for this release.

For more information on changes in 2.52.0, check out the detailed release notes.

Highlights

  • Avro-dependent code that was deprecated in Beam 2.46.0 has finally been removed from the Java SDK "core" package. Use beam-sdks-java-extensions-avro instead (a migration sketch follows this list). This lets user code update its Avro version without potential breaking changes in Beam "core", since the Beam Avro extension already supports the latest Avro versions. (#25252)
  • Publishing Java 21 SDK container images is now supported as part of the Apache Beam release process. (#28120)
    • Direct Runner and Dataflow Runner support running pipelines on Java 21 (experimental until tests are fully set up). For other runners (Flink, Spark, Samza, etc.), support status depends on the runner projects.
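
For Java users affected by the Avro removal, the migration is mostly an import change: AvroCoder and the other Avro helpers now live in the beam-sdks-java-extensions-avro module under org.apache.beam.sdk.extensions.avro. Below is a minimal sketch; the User POJO and the sample elements are illustrative assumptions, not something from the release notes.

```java
// Minimal sketch: using AvroCoder from the Avro extension module after the 2.52.0 removal.
// Requires the org.apache.beam:beam-sdks-java-extensions-avro dependency at the same beam.version.
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.avro.coders.AvroCoder; // previously org.apache.beam.sdk.coders.AvroCoder
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class AvroExtensionExample {

  /** Hypothetical Avro-reflect-compatible POJO, used only for illustration. */
  public static class User {
    public String name;
    public User() {}                                // Avro reflection needs a no-arg constructor
    public User(String name) { this.name = name; }
  }

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    PCollection<User> users = p.apply(
        Create.of(new User("alice"), new User("bob"))
            .withCoder(AvroCoder.of(User.class)));  // coder now comes from the extensions package

    p.run().waitUntilFinish();
  }
}
```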

New Features / Improvements

  • Added a UseDataStreamForBatch pipeline option to the Flink runner; a sketch of its use follows this list. When set to true, the Flink runner runs batch jobs using the DataStream API. The option defaults to false, so batch jobs are still executed using the DataSet API.
  • The upload_graph experiment option for DataflowRunner is no longer required for the Java SDK when the graph is larger than 10 MB (PR #28621).
  • The state and side-input cache is now enabled with a default size of 100 MB. Use --max_cache_memory_usage_mb=X to set the cache size for the user state API and side inputs. (Python) (#28770).
  • Beam YAML stable release. Beam pipelines can now be written in YAML using the Beam YAML framework, which includes a preliminary set of IOs and turnkey transforms. More information can be found in the YAML root folder and in the README.
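
A minimal sketch of opting a batch job into the DataStream API on the Flink runner, under the assumption that UseDataStreamForBatch is exposed like any other pipeline option and can therefore be passed as a --useDataStreamForBatch flag (the flag spelling is inferred from the option name, not stated in the release notes).

```java
// Minimal sketch (assumption: the UseDataStreamForBatch option surfaces as a standard
// --useDataStreamForBatch flag). Requires the Beam Flink runner on the classpath.
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class FlinkDataStreamBatchExample {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(
            "--runner=FlinkRunner",
            "--useDataStreamForBatch=true") // assumed flag name; defaults to false (DataSet API)
        .withValidation()
        .create();

    Pipeline p = Pipeline.create(options);
    // ... build the batch pipeline as usual ...
    p.run().waitUntilFinish();
  }
}
```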

Breaking Changes

  • org.apache.beam.sdk.io.CountingSource.CounterMark now uses a custom CounterMarkCoder as its default coder, since all Avro-dependent classes have moved to extensions/avro. If AvroCoder is still required for CounterMark, the workaround is to copy the "old" CountingSource class into your project and use it directly (#25252).
  • Renamed host to firestoreHost in FirestoreOptions to avoid a potential command-line argument conflict (Java) (#29201); see the sketch after this list.
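
For the Firestore rename, any code or flags that previously set host should now set firestoreHost. The sketch below assumes the option follows the usual PipelineOptions getter/setter and flag naming (setFirestoreHost / --firestoreHost) and that FirestoreOptions lives in the GCP IO package; the emulator address is a placeholder.

```java
// Minimal sketch of the renamed option (assumptions: the package path of FirestoreOptions and
// the setFirestoreHost/getFirestoreHost accessors derived from the new "firestoreHost" name).
import org.apache.beam.sdk.io.gcp.firestore.FirestoreOptions; // package path assumed
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class FirestoreHostExample {
  public static void main(String[] args) {
    FirestoreOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FirestoreOptions.class);

    // Before 2.52.0 this was setHost(...); it was renamed to avoid clashing with
    // other --host command-line arguments.
    options.setFirestoreHost("localhost:8080"); // placeholder emulator address

    System.out.println("Firestore host: " + options.getFirestoreHost());
  }
}
```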

Bugfixes

  • Fixed "Desired bundle size 0 bytes must be greater than 0" in Java SDK's BigtableIO.BigtableSource when you have more cores than bytes to read (Java) #28793.
  • watch_file_pattern arg of the RunInference arg had no effect prior to 2.52.0. To use the behavior of arg watch_file_pattern prior to 2.52.0, follow the documentation at https://beam.apache.org/documentation/ml/side-input-updates/ and use WatchFilePattern PTransform as a SideInput. (#28948)
  • MLTransform doesn't output artifacts such as min, max and quantiles. Instead, MLTransform will add a feature to output these artifacts as human readable format - #29017. For now, to use the artifacts such as min and max that were produced by the eariler MLTransform, use read_artifact_location of MLTransform, which reads artifacts that were produced earlier in a different MLTransform (#29016)
  • Fixed a memory leak, which affected some long-running Python pipelines: #28246.
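
For context on the BigtableIO fix, the read below is the kind of pipeline that could previously hit the zero-bundle-size error when a very small table was split across many cores; the project, instance, and table IDs are placeholders.

```java
// Minimal sketch of a BigtableIO read (placeholder IDs). On tiny tables read with many worker
// cores, this could previously fail with "Desired bundle size 0 bytes must be greater than 0";
// fixed in Beam 2.52.0 (#28793).
import com.google.bigtable.v2.Row;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class BigtableReadExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());

    PCollection<Row> rows = p.apply(
        BigtableIO.read()
            .withProjectId("my-project")    // placeholder
            .withInstanceId("my-instance")  // placeholder
            .withTableId("tiny-table"));    // placeholder

    p.run().waitUntilFinish();
  }
}
```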

Security Fixes

List of Contributors

According to git shortlog, the following people contributed to the 2.52.0 release. Thank you to all contributors!

... (truncated)

Changelog

Sourced from org.apache.beam:beam-sdks-java-core's changelog.

[2.52.0] - 2023-11-17

Highlights, New Features / Improvements, Breaking Changes, Bugfixes, and Security Fixes: identical to the release notes quoted above.

Known issues

  • MLTransform drops identical elements in the output PCollection: for any duplicate elements, a single element is emitted downstream. (#29600).

[2.51.0] - 2023-10-03

New Features / Improvements

... (truncated)

Commits

Updates org.apache.beam:beam-runners-google-cloud-dataflow-java from 2.41.0 to 2.52.0

Release notes, changelog, and commits: identical to the Beam 2.52.0 material quoted above for org.apache.beam:beam-sdks-java-core.

Updates org.apache.beam:beam-sdks-java-io-google-cloud-platform from 2.41.0 to 2.52.0

Release notes, changelog, and commits: identical to the Beam 2.52.0 material quoted above for org.apache.beam:beam-sdks-java-core.

Updates org.apache.beam:beam-runners-direct-java from 2.41.0 to 2.52.0

Release notes, changelog, and commits: identical to the Beam 2.52.0 material quoted above for org.apache.beam:beam-sdks-java-core.

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps `beam.version` from 2.41.0 to 2.52.0.

Updates `org.apache.beam:beam-sdks-java-core` from 2.41.0 to 2.52.0
- [Release notes](https://github.com/apache/beam/releases)
- [Changelog](https://github.com/apache/beam/blob/master/CHANGES.md)
- [Commits](apache/beam@v2.41.0...v2.52.0)

Updates `org.apache.beam:beam-runners-google-cloud-dataflow-java` from 2.41.0 to 2.52.0
- [Release notes](https://github.com/apache/beam/releases)
- [Changelog](https://github.com/apache/beam/blob/master/CHANGES.md)
- [Commits](apache/beam@v2.41.0...v2.52.0)

Updates `org.apache.beam:beam-sdks-java-io-google-cloud-platform` from 2.41.0 to 2.52.0
- [Release notes](https://github.com/apache/beam/releases)
- [Changelog](https://github.com/apache/beam/blob/master/CHANGES.md)
- [Commits](apache/beam@v2.41.0...v2.52.0)

Updates `org.apache.beam:beam-runners-direct-java` from 2.41.0 to 2.52.0
- [Release notes](https://github.com/apache/beam/releases)
- [Changelog](https://github.com/apache/beam/blob/master/CHANGES.md)
- [Commits](apache/beam@v2.41.0...v2.52.0)

---
updated-dependencies:
- dependency-name: org.apache.beam:beam-sdks-java-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.beam:beam-runners-google-cloud-dataflow-java
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.beam:beam-sdks-java-io-google-cloud-platform
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.beam:beam-runners-direct-java
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies label (Pull requests that update a dependency file) on Dec 11, 2023

codecov bot commented Dec 11, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (2c61f34) 89.35% compared to head (8211641) 89.35%.

Additional details and impacted files
@@            Coverage Diff            @@
##             master      #56   +/-   ##
=========================================
  Coverage     89.35%   89.35%           
  Complexity      266      266           
=========================================
  Files            42       42           
  Lines          1137     1137           
  Branches         64       64           
=========================================
  Hits           1016     1016           
  Misses           86       86           
  Partials         35       35           

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.
