Releases: zio/zio-query

v0.2.5

06 Sep 12:17
77a694d

This release contains optimizations to the performance of parallel queries.

v0.2.4

04 Aug 00:26
f1bdf2f

This release contains support for ZIO 1.0.0.

v0.2.3

27 Jun 04:32
c138c07

This release contains support for ZIO 1.0.0-RC21-1. It also improves the performance of large queries. Upgrading is recommended for all users.

v0.2.2

02 Jun 01:24
3111662

This release contains a fix for a bug that could cause queries to fail when using ZQuery.collectAllPar with multiple data sources.

v0.2.1

26 May 13:12
ac671d5

This release upgrades ZIO Query to support ZIO 1.0.0-RC20.

v0.2.0

17 May 11:21
609786f

This release contains support for pipelining and, for users of Caliban, is also the first release since the move to the ZIO organization. Below are notes regarding pipelining as well as a migration guide for users of previous versions. If you experience issues with these new features or with migrating, please reach out on Discord. Thank you for your support!

Pipelining

ZQuery now supports pipelining. This means that requests that are independent of each other but must be performed sequentially can now be combined together. For example, consider the following query:

incr("some_key") *> ping *> decr("some_other_key")

Previously, this would have resulted in three separate requests, because the parts of the composite query must be performed sequentially and so cannot be batched. However, the three parts of the query are independent of each other, so they can be combined into a single request as long as the data source executes them in order. ZQuery now detects this automatically. To support pipelining, the signature of DataSource has been changed to:

trait DataSource[-R, -A] {
  def runAll(requests: Chunk[Chunk[A]]): ZIO[R, Nothing, CompletedRequestMap]
}

Here the outer Chunk describes batches of requests that must be performed sequentially, and each inner Chunk describes a batch of requests that can be performed in parallel. This allows data sources to "look ahead" and perform optimizations based on requests they know they will have to execute. For data sources that batch requests that can be performed in parallel but do not further optimize batches of requests that must be performed sequentially, you can use the simplified Batched trait:

trait Batched[-R, -A] extends DataSource[R, A] {
  def run(requests: Chunk[A]): ZIO[R, Nothing, CompletedRequestMap]
}
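To make pipelining concrete, here is a hedged sketch of a data source that implements runAll directly. The requests (Incr, Ping) and the data source name are made up for illustration and are not part of zio-query; the responses are stubbed in place of a real round trip to a store:

```scala
import zio.{ Chunk, ZIO }
import zio.query.{ CompletedRequestMap, DataSource, Request }

// Hypothetical requests against a Redis-like store; not part of zio-query.
sealed trait CacheRequest[+A]      extends Request[Nothing, A]
final case class Incr(key: String) extends CacheRequest[Long]
case object Ping                   extends CacheRequest[Unit]

val CacheDataSource: DataSource[Any, CacheRequest[Any]] =
  new DataSource[Any, CacheRequest[Any]] {
    val identifier: String = "CacheDataSource"

    // The outer Chunk is sequential and each inner Chunk is parallel.
    // Because all batches are visible up front, a real implementation could
    // send them over the wire as one pipelined command sequence instead of
    // one round trip per batch. Here we just stub the responses.
    def runAll(requests: Chunk[Chunk[CacheRequest[Any]]]): ZIO[Any, Nothing, CompletedRequestMap] =
      ZIO.succeed {
        requests.flatten.foldLeft(CompletedRequestMap.empty) { (map, request) =>
          request match {
            case r @ Incr(_) => map.insert(r)(Right(0L)) // stubbed response
            case Ping        => map.insert(Ping)(Right(()))
          }
        }
      }
  }
```

With this data source, a query like incr("some_key") *> ping *> decr("some_other_key") arrives at runAll as three sequential batches of one request each, and the data source is free to execute them in a single pipelined request.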

Migration Guide

Support for pipelining brings some changes to the encoding of data sources, and for users of Caliban this is also the first release since ZQuery moved to the ZIO organization. Here are the changes you need to make to get going on the latest version.

Imports

For users of Caliban, ZQuery has moved from the zquery package to the zio.query package. As such, change any imports from import zquery._ to import zio.query._.

Data Sources

The signature of DataSource has changed from taking an Iterable[A] to taking a Chunk[Chunk[A]] to reflect that data sources can now handle both sequential and parallel requests. If you are extending DataSource directly, change your code from:

val UserDataSource = new DataSource[Any, GetUserName] {
  val identifier: String = ???
  def run(requests: Iterable[GetUserName]): ZIO[Any, Nothing, CompletedRequestMap] = ???
}

to:

val UserDataSource = new DataSource.Batched[Any, GetUserName] {
  val identifier: String = ???
  def run(requests: Chunk[GetUserName]): ZIO[Any, Nothing, CompletedRequestMap] = ???
}

Notice that we changed the type from DataSource to DataSource.Batched to reflect that our data source batches requests that can be performed in parallel but does not further optimize batches of requests that must be performed sequentially. We also changed the signature of run to accept a Chunk instead of an Iterable. On the latest ZIO version Chunk extends IndexedSeq, so any code you wrote to work with an Iterable will still work, and you can now take advantage of the additional structure of Chunk.
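A minimal illustration of why existing Iterable-based code keeps working with Chunk:

```scala
import zio.Chunk

val requests: Chunk[String] = Chunk("a", "b", "c")

// Any code written against Iterable still compiles against Chunk...
val upper: Iterable[String] = requests.map(_.toUpperCase)

// ...and you additionally get efficient indexed access.
val first: String = requests(0) // "a"
```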

If you are using one of the "batched" DataSource constructors (e.g. fromFunctionBatched, fromFunctionBatchedM, fromFunctionBatchedOption, fromFunctionBatchedOptionM, fromFunctionBatchedWith, or fromFunctionBatchedWithM), note that these constructors now expect the function you provide to return a Chunk. Since these functions also accept a Chunk, this will normally just work, but if you need to, you can always convert an Iterable into a Chunk using Chunk.fromIterable.
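As a hedged sketch of the constructor change (GetAge and the ages map are made up for illustration, and the constructor signature shown is assumed from the 0.2.x API), Chunk.fromIterable bridges any existing Iterable-producing logic:

```scala
import zio.Chunk
import zio.query.{ DataSource, Request }

// Hypothetical request type for illustration.
final case class GetAge(name: String) extends Request[Nothing, Int]

val ages: Map[String, Int] = Map("Alice" -> 42, "Bob" -> 23)

val AgeDataSource: DataSource[Any, GetAge] =
  DataSource.fromFunctionBatched("AgeDataSource") { requests: Chunk[GetAge] =>
    // Pre-existing logic that produces an Iterable (here, a List)...
    val results: List[Int] = requests.toList.map(r => ages.getOrElse(r.name, 0))
    // ...converted to the Chunk the constructor now expects.
    Chunk.fromIterable(results)
  }
```

The returned Chunk must contain one result per request, in the same order as the input.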

v0.1.0

11 May 03:02
23d7df2

Initial release.