Releases: googleapis/google-cloud-java

0.2.8

25 Aug 18:31

Features

Datastore

  • gcloud-java-datastore now uses Datastore v1 (#1169)

Translate

  • gcloud-java-translate, a new client library to interact with Google Translate, is released and is in alpha. See the docs for more information.
    See TranslateExample for a complete example or API Documentation for gcloud-java-translate javadoc.
    The following snippet shows how to detect the language of some text and how to translate text into the detected language.
    Complete source code can be found in
    DetectLanguageAndTranslate.java.
import com.google.cloud.translate.Detection;
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;

Translate translate = TranslateOptions.defaultInstance().service();

Detection detection = translate.detect("Hola");
String detectedLanguage = detection.language();

Translation translation = translate.translate(
    "World",
    TranslateOption.sourceLanguage("en"),
    TranslateOption.targetLanguage(detectedLanguage));

System.out.printf("Hola %s%n", translation.translatedText());

Fixes

Core

  • SocketException and "insufficient data written" IOException are now retried (#1187)

Storage NIO

  • Enumerating filesystems no longer fails if gcloud-java-nio is in the classpath and no credentials are available (#1189)
  • Rename CloudStorageFileSystemProvider.setGCloudOptions to CloudStorageFileSystemProvider.setStorageOptions (#1189)

0.2.7

11 Aug 13:54

Fixes

BigQuery

  • String setters for DeprecationStatus timestamps are removed from DeprecationStatus.Builder. Getters are still available in DeprecationStatus for legacy support (#1127).
  • Fix table's StreamingBuffer to allow oldestEntryTime to be null (#1141).
  • Add support for useLegacySql to QueryRequest and QueryJobConfiguration (#1142).
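
For illustration, a minimal sketch of opting out of legacy SQL on a QueryRequest (the builder method names follow this release's builder-style API; the table reference is a placeholder):

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryRequest;
import com.google.cloud.bigquery.QueryResponse;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
// Run the query with standard SQL rather than legacy SQL
QueryRequest request =
    QueryRequest.builder("SELECT word FROM `publicdata.samples.shakespeare` LIMIT 10")
        .useLegacySql(false)
        .build();
QueryResponse response = bigquery.query(request);
```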

Datastore

  • Fix Datastore exceptions conversion: use getNumber() instead of ordinal() to get DatastoreException's error code (#1140).
  • Use HTTP transport factory, as set via DatastoreOptions, to perform service requests (#1144).

Logging

  • Set gcloud-java user agent in gcloud-java-logging, as done for other modules (#1147).

PubSub

  • Change Pub/Sub endpoint from pubsub-experimental.googleapis.com to pubsub.googleapis.com (#1149).

0.2.6

21 Jul 18:07

Features

BigQuery

  • Add support for time-partitioned tables. For example, you can now create a time-partitioned table using the following code:
TableId tableId = TableId.of(datasetName, tableName);
TimePartitioning partitioning = TimePartitioning.of(Type.DAY);
// You can also set the partition expiration (in milliseconds); note the long literal
// TimePartitioning partitioning = TimePartitioning.of(Type.DAY, 2592000000L);
StandardTableDefinition tableDefinition = StandardTableDefinition.builder()
    .schema(tableSchema)
    .timePartitioning(partitioning)
    .build();
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));

Logging

  • gcloud-java-logging, a new client library to interact with Stackdriver Logging, is released and is in alpha. See the docs for more information.
    gcloud-java-logging uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-logging will work on App Engine Flexible.
    See LoggingExample for a complete example or API Documentation for gcloud-java-logging javadoc.
    The following snippet shows how to write and list log entries. Complete source code can be found in
    WriteAndListLogEntries.java.
import com.google.cloud.MonitoredResource;
import com.google.cloud.Page;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;

import java.util.Collections;
import java.util.Iterator;

LoggingOptions options = LoggingOptions.defaultInstance();
try (Logging logging = options.service()) {

  LogEntry firstEntry = LogEntry.builder(StringPayload.of("message"))
      .logName("test-log")
      .resource(MonitoredResource.builder("global")
          .addLabel("project_id", options.projectId())
          .build())
      .build();
  logging.write(Collections.singleton(firstEntry));

  Page<LogEntry> entries = logging.listLogEntries(
      EntryListOption.filter("logName=projects/" + options.projectId() + "/logs/test-log"));
  Iterator<LogEntry> entryIterator = entries.iterateAll();
  while (entryIterator.hasNext()) {
    System.out.println(entryIterator.next());
  }
}

The next snippet shows how to use a java.util.logging.Logger to write log entries to Stackdriver Logging. The snippet installs a Stackdriver Logging handler using
LoggingHandler.addHandler(Logger, LoggingHandler). Notice that this could also be done through the logging.properties file by adding the following line:

com.google.cloud.examples.logging.snippets.AddLoggingHandler.handlers=com.google.cloud.logging.LoggingHandler

The complete code can be found in AddLoggingHandler.java.

import com.google.cloud.logging.LoggingHandler;

import java.util.logging.Logger;

Logger logger = Logger.getLogger(AddLoggingHandler.class.getName());
LoggingHandler.addHandler(logger, new LoggingHandler());
logger.warning("test warning");

0.2.5

03 Jul 18:35

Features

Storage NIO

  • gcloud-java-nio, a new client library that lets you interact with Google Cloud Storage through Java's NIO API, is released and is in alpha. Not all NIO features have been implemented yet; see the docs for more information.
    The simplest way to get started with gcloud-java-nio is with Paths and Files:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);

InputStream and OutputStream can also be used for streaming:

Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
try (InputStream input = Files.newInputStream(path)) {
  // use input stream
}

To configure a bucket per-environment, you can use the FileSystem API:

FileSystem fs = FileSystems.getFileSystem(URI.create("gs://bucket"));
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);

If you don't want to rely on Java SPI, which requires a META-INF file in your jar generated by Google Auto, you can instantiate this file system directly as follows:

CloudStorageFileSystem fs = CloudStorageFileSystem.forBucket("bucket");
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
data = Files.readAllBytes(path);

For instructions on how to add Google Cloud Storage NIO support to a legacy jar see this example. For more examples see here.

Fixes

Storage

  • Fix BlobReadChannel to support reading and seeking files larger than Integer.MAX_VALUE bytes
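
A short sketch of seeking past the 2 GiB mark (bucket and object names are placeholders; this assumes the read channel accepts a long offset after this fix):

```java
import com.google.cloud.ReadChannel;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.nio.ByteBuffer;

Storage storage = StorageOptions.defaultInstance().service();
try (ReadChannel reader = storage.reader(BlobId.of("bucket", "large-object"))) {
  // Seek beyond Integer.MAX_VALUE bytes, which previously failed
  reader.seek(3L * 1024L * 1024L * 1024L);
  ByteBuffer bytes = ByteBuffer.allocate(64 * 1024);
  reader.read(bytes);
}
```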

0.2.4

29 Jun 09:49

Features

Pub/Sub

  • gcloud-java-pubsub, a new client library to interact with Google Cloud Pub/Sub, is released and is in alpha. See the docs for more information.
    gcloud-java-pubsub uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-pubsub will work on App Engine Flexible.
    See PubSubExample for a complete example or API Documentation for gcloud-java-pubsub javadoc.
    The following snippet shows how to create a Pub/Sub topic and asynchronously publish messages to it. See CreateTopicAndPublishMessages.java for the full source code.
  try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
    Topic topic = pubsub.create(TopicInfo.of("test-topic"));
    Message message1 = Message.of("First message");
    Message message2 = Message.of("Second message");
    topic.publishAsync(message1, message2);
  }

The following snippet, instead, shows how to create a Pub/Sub pull subscription and asynchronously pull messages from it. See CreateSubscriptionAndPullMessages.java for the full source code.

  try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
    Subscription subscription =
        pubsub.create(SubscriptionInfo.of("test-topic", "test-subscription"));
    MessageProcessor callback = new MessageProcessor() {
      @Override
      public void process(Message message) throws Exception {
        System.out.printf("Received message \"%s\"%n", message.payloadAsString());
      }
    };
    // Create a message consumer and pull messages (for 60 seconds)
    try (MessageConsumer consumer = subscription.pullAsync(callback)) {
      Thread.sleep(60_000);
    }
  }

0.2.3

10 Jun 16:16

Features

BigQuery

  • Add support for the BYTES datatype. A field of type BYTES can be created by using Field.Type.bytes(). The byte[] bytesValue() method is added to FieldValue to return the value of a field as a byte array.
  • A Job waitFor(WaitForOption... waitOptions) method is added to the Job class. This method waits for the job to complete and returns the job's updated information:
Job completedJob = job.waitFor();
if (completedJob == null) {
  // job no longer exists
} else if (completedJob.status().error() != null) {
  // job failed, handle error
} else {
  // job completed successfully
}

By default, the job status is checked every 500 milliseconds. To change the polling interval, use WaitForOption.checkEvery(long, TimeUnit); to set a maximum time to wait, use WaitForOption.timeout(long, TimeUnit).
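
For example, both options might be combined like this (a sketch; `job` is the Job instance from the snippet above, and WaitForOption is assumed to live in com.google.cloud):

```java
import com.google.cloud.WaitForOption;

import java.util.concurrent.TimeUnit;

// Poll every second, but give up after three minutes
Job completedJob = job.waitFor(
    WaitForOption.checkEvery(1, TimeUnit.SECONDS),
    WaitForOption.timeout(3, TimeUnit.MINUTES));
```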

Compute

  • An Operation waitFor(WaitForOption... waitOptions) method is added to the Operation class. This method waits for the operation to complete and returns the operation's updated information:
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
  // operation no longer exists
} else if (completedOperation.errors() != null) {
  // operation failed, handle error
} else {
  // operation completed successfully
}

By default, the operation status is checked every 500 milliseconds. To change the polling interval, use WaitForOption.checkEvery(long, TimeUnit); to set a maximum time to wait, use WaitForOption.timeout(long, TimeUnit).

Fixes

Storage

  • StorageExample now contains examples on how to add ACLs to blobs and buckets (#1033).
  • A BlobInfo.createTime() getter has been added. This method returns the time at which a blob was created (#1034).

0.2.2

20 May 21:35

Features

Core

  • The Clock abstract class has been moved out of ServiceOptions. ServiceOptions.clock() is now used by RetryHelper in all service calls. This enables mocking the clock source used for retries when testing your code.
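
As a sketch of the testing pattern (the millis() override and the builder method names are assumptions based on this release's API):

```java
import com.google.cloud.Clock;
import com.google.cloud.storage.StorageOptions;

// A fake clock that always reports the same instant, so retry/backoff
// logic can be exercised deterministically in tests
Clock fakeClock = new Clock() {
  @Override
  public long millis() {
    return 1472601600000L;
  }
};
StorageOptions options = StorageOptions.builder().clock(fakeClock).build();
```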

Storage

  • Refactor storage batches to use the common BatchResult class. Sending batch requests in Storage is now as simple as in DNS. See the following example of sending a batch request:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
  @Override
  public void success(Boolean result) {
    // handle delete result
  }

  @Override
  public void error(StorageException exception) {
    // handle exception
  }
});
// Ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException

Fixes

Datastore

  • Update datastore client to accept IP addresses for localhost (#1002).
  • LocalDatastoreHelper now uses https to download the emulator - thanks to @pehrs (#942).
  • Add example on embedded entities to DatastoreExample (#980).

Storage

  • Fix StorageImpl.signUrl for blob names that start with "/" - thanks to @clementdenis (#1013).
  • Fix readAllBytes permission error on Google AppEngine (#1010).

0.2.1

29 Apr 22:11

Features

Compute

  • gcloud-java-compute, a new client library to interact with Google Compute Engine, is released and is in alpha. See the docs for more information. See ComputeExample for a complete example or API Documentation for gcloud-java-compute javadoc.
    The following snippet shows how to create a region external IP address, a persistent boot disk and a virtual machine instance that uses both the IP address and the persistent disk. See CreateAddressDiskAndInstance.java for the full source code.
    // Create a service object
    // Credentials are inferred from the environment.
    Compute compute = ComputeOptions.defaultInstance().service();

    // Create an external region address
    RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
    Operation operation = compute.create(AddressInfo.of(addressId));
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Address " + addressId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Address creation failed");
    }

    // Create a persistent disk
    ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
    DiskId diskId = DiskId.of("us-central1-a", "test-disk");
    ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
    DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
    operation = compute.create(disk);
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Disk " + diskId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Disk creation failed");
    }

    // Create a virtual machine instance
    Address externalIp = compute.getAddress(addressId);
    InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
    NetworkId networkId = NetworkId.of("default");
    PersistentDiskConfiguration attachConfiguration =
        PersistentDiskConfiguration.builder(diskId).boot(true).build();
    AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
    NetworkInterface networkInterface = NetworkInterface.builder(networkId)
        .accessConfigurations(AccessConfig.of(externalIp.address()))
        .build();
    MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
    InstanceInfo instance =
        InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
    operation = compute.create(instance);
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Instance " + instanceId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Instance creation failed");
    }

Datastore

  • An options(String namespace) method has been added to LocalDatastoreHelper, allowing you to create test options for a specific namespace (#936).
  • Static of methods have been added to ListValue to support specific types (String, long, double, boolean, DateTime, LatLng, Key, FullEntity and Blob). addValue methods have been added to ListValue.Builder to support the same set of types (#934).
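
For instance, a sketch using the new methods described above:

```java
import com.google.cloud.datastore.ListValue;

// Create a list value directly from typed elements
ListValue tags = ListValue.of("urgent", "review");
// Or mix types through the builder
ListValue mixed = ListValue.builder()
    .addValue("label")
    .addValue(42L)
    .addValue(true)
    .build();
```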

DNS

  • Support for batches has been added to gcloud-java-dns (#940). Batches allow you to perform multiple operations in a single RPC request.
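
A hedged sketch of what a DNS batch might look like (the DnsBatch type and its method names are assumptions based on the batch support described above; the zone names are placeholders):

```java
import com.google.cloud.dns.Dns;
import com.google.cloud.dns.DnsBatch;
import com.google.cloud.dns.DnsOptions;

Dns dns = DnsOptions.defaultInstance().service();
DnsBatch batch = dns.batch();
// Queue several lookups; nothing is sent until submit()
batch.getZone("zone-a");
batch.getZone("zone-b");
// All queued operations travel in a single RPC request
batch.submit();
```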

Fixes

Core

  • The causing exception is now chained in BaseServiceException.getCause() (#774).

0.2.0

12 Apr 18:53

Features

General

  • gcloud-java has been repackaged. com.google.gcloud has now changed to com.google.cloud, and we're releasing our artifacts on Maven under the group ID com.google.cloud rather than com.google.gcloud. The new way to add our library as a dependency in your project is as follows:

If you're using Maven, add this to your pom.xml file

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>gcloud-java</artifactId>
  <version>0.2.0</version>
</dependency>

If you are using Gradle, add this to your dependencies

compile 'com.google.cloud:gcloud-java:0.2.0'

If you are using SBT, add this to your dependencies

libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"

Storage

  • The interface ServiceAccountSigner was added. Both AppEngineAuthCredentials and ServiceAccountAuthCredentials extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854).
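
For example, a signed URL can be produced like this (bucket and object names are placeholders; the signing credentials are those configured on the service object):

```java
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.net.URL;
import java.util.concurrent.TimeUnit;

Storage storage = StorageOptions.defaultInstance().service();
Blob blob = storage.get("bucket", "object");
// Anyone holding this URL can read the blob for the next two weeks
URL signedUrl = blob.signUrl(14, TimeUnit.DAYS);
```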

Fixes

General

  • The default RPC retry parameters were changed to align with the backoff policy requirement listed in the Service Level Agreements (SLAs) for Cloud BigQuery, Cloud Datastore, and Cloud Storage (#857, #860).
  • The expiration date is now properly populated for App Engine credentials (#873, #894).
  • gcloud-java now uses the project ID given in the credentials file specified by the environment variable GOOGLE_APPLICATION_CREDENTIALS (if set) (#845).

BigQuery

  • Job's isDone method is fixed to return true if the job is complete or the job doesn't exist (#853).

Datastore

  • LocalGcdHelper has been renamed to LocalDatastoreHelper, and the command-line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the create, start, and stop methods (#821).
  • ListValue no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).

DNS

  • There were some minor changes to ChangeRequest, namely adding reload/isDone methods and changing the method signature of applyTo (#849).

Storage

  • RemoteGcsHelper was renamed to RemoteStorageHelper to be more consistent with other modules' test helpers (#821).

0.1.7

02 Apr 00:01

Features

Datastore

  • gcloud-java-datastore now uses Cloud Datastore v1beta3. You can read more about updates in Datastore v1beta3 here. Note that to use this new API, you may have to re-enable the Google Cloud Datastore API in the Developers Console. The following API changes are coupled with this update.
    • Entity-related changes:
      • Entities are indexed by default, and indexed has been changed to excludeFromIndexes. Properties of type EntityValue and type ListValue can now be indexed. Moreover, indexing and querying properties inside of entity values is now supported. Values inside entity values are indexed by default.
      • LatLng and LatLngValue, representing the new property type for latitude & longitude, are added.
      • The getter for a value's meaning has been made package scope instead of public, as it is a deprecated field.
    • Read/write-related changes:
      • Force writes have been removed. Since force writes were the only existing option in batch and transaction options, the BatchOption and TransactionOption classes are now removed.
      • ReadOption is added to allow users to specify eventual consistency on Datastore reads. This can be a useful optimization when strongly consistent results for get/fetch or ancestor queries aren't necessary.
    • Query-related changes:
      • QueryResults.cursorAfter() is updated to point to the position after the last consumed result. In v1beta2, cursorAfter was only updated after all results were consumed.
      • groupBy is replaced by distinctOn.
      • The Projection class in StructuredQuery is replaced with a string representing the property name. Aggregation functions are removed.
      • There are changes in GQL syntax:
        • In synthetic literal KEY, DATASET is now PROJECT.
        • The BLOBKEY synthetic literal is removed.
        • The FIRST aggregator is removed.
        • The GROUP BY clause is replaced with DISTINCT ON.
        • Fully-qualified property names are now supported.
        • Query filters on timestamps prior to the epoch are now supported.
    • Other miscellaneous changes:
      • The "userinfo.email" authentication scope is no longer required. This means you don't need to enable that permission when creating new instances on Google Compute Engine to use gcloud-java-datastore.
      • The default value for namespace is now an empty string rather than null.
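
As an illustration of the new ReadOption described above (a sketch; the kind and key names are placeholders):

```java
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.ReadOption;

Datastore datastore = DatastoreOptions.defaultInstance().service();
Key key = datastore.newKeyFactory().kind("Task").newKey("task1");
// Opt into eventually consistent results for this lookup
Entity task = datastore.get(key, ReadOption.eventualConsistency());
```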

Fixes

General

  • In gcloud-java-bigquery, gcloud-java-dns, and gcloud-java-storage, the field id() has been renamed to generatedId for classes that are assigned ids from the service.

Datastore

  • Issue #548 (internal errors when trying to load large numbers of entities without setting a limit) is fixed. The workaround mentioned in that issue is no longer necessary.