Releases: googleapis/google-cloud-java
0.2.8
Features
Datastore
gcloud-java-datastore now uses Datastore v1 (#1169).
Translate
gcloud-java-translate, a new client library to interact with Google Translate, is released and is in alpha. See the docs for more information.
See TranslateExample for a complete example or API Documentation for gcloud-java-translate javadoc.
The following snippet shows how to detect the language of some text and how to translate some text. Complete source code can be found on DetectLanguageAndTranslate.java.
import com.google.cloud.translate.Detection;
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;
Translate translate = TranslateOptions.defaultInstance().service();
Detection detection = translate.detect("Hola");
String detectedLanguage = detection.language();
Translation translation = translate.translate(
"World",
TranslateOption.sourceLanguage("en"),
TranslateOption.targetLanguage(detectedLanguage));
System.out.printf("Hola %s%n", translation.translatedText());
Fixes
Core
SocketException and "insufficient data written" IOException are now retried (#1187).
Storage NIO
0.2.7
Fixes
BigQuery
- String setters for DeprecationStatus timestamps are removed from DeprecationStatus.Builder. Getters are still available in DeprecationStatus for legacy support (#1127).
- Fix table's StreamingBuffer to allow oldestEntryTime to be null (#1141).
- Add support for useLegacySql to QueryRequest and QueryJobConfiguration (#1142), as shown in the sketch after this list.
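The following sketch shows one way to opt out of legacy SQL for an ad-hoc query; the query string is a placeholder and the builder-style useLegacySql setter is assumed from #1142:
QueryRequest request = QueryRequest.builder("SELECT 17 AS answer")
    .useLegacySql(false) // run the query with standard SQL instead of legacy SQL
    .build();
QueryResponse response = bigquery.query(request);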
Datastore
- Fix Datastore exceptions conversion: use getNumber() instead of ordinal() to get DatastoreException's error code (#1140).
- Use HTTP transport factory, as set via DatastoreOptions, to perform service requests (#1144).
Logging
- Set gcloud-java user agent in gcloud-java-logging, as done for other modules (#1147).
PubSub
- Change Pub/Sub endpoint from pubsub-experimental.googleapis.com to pubsub.googleapis.com (#1149).
0.2.6
Features
BigQuery
- Add support for time-partitioned tables. For example, you can now create a time-partitioned table using the following code:
TableId tableId = TableId.of(datasetName, tableName);
TimePartitioning partitioning = TimePartitioning.of(Type.DAY);
// You can also set the expiration (in milliseconds)
// TimePartitioning partitioning = TimePartitioning.of(Type.DAY, 2592000000L);
StandardTableDefinition tableDefinition = StandardTableDefinition.builder()
.schema(tableSchema)
.timePartitioning(partitioning)
.build();
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));
Logging
gcloud-java-logging, a new client library to interact with Stackdriver Logging, is released and is in alpha. See the docs for more information.
gcloud-java-logging uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-logging will work on App Engine Flexible.
See LoggingExample for a complete example or API Documentation for gcloud-java-logging javadoc.
The following snippet shows how to write and list log entries. Complete source code can be found on WriteAndListLogEntries.java.
import com.google.cloud.MonitoredResource;
import com.google.cloud.Page;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import java.util.Collections;
import java.util.Iterator;
LoggingOptions options = LoggingOptions.defaultInstance();
try (Logging logging = options.service()) {
LogEntry firstEntry = LogEntry.builder(StringPayload.of("message"))
.logName("test-log")
.resource(MonitoredResource.builder("global")
.addLabel("project_id", options.projectId())
.build())
.build();
logging.write(Collections.singleton(firstEntry));
Page<LogEntry> entries = logging.listLogEntries(
EntryListOption.filter("logName=projects/" + options.projectId() + "/logs/test-log"));
Iterator<LogEntry> entryIterator = entries.iterateAll();
while (entryIterator.hasNext()) {
System.out.println(entryIterator.next());
}
}
The following snippet, instead, shows how to use a java.util.logging.Logger to write log entries to Stackdriver Logging. The snippet installs a Stackdriver Logging handler using LoggingHandler.addHandler(Logger, LoggingHandler). Notice that this could also be done through the logging.properties file, adding the following line:
com.google.cloud.examples.logging.snippets.AddLoggingHandler.handlers=com.google.cloud.logging.LoggingHandler
The complete code can be found on AddLoggingHandler.java.
import com.google.cloud.logging.LoggingHandler;
import java.util.logging.Logger;
Logger logger = Logger.getLogger(AddLoggingHandler.class.getName());
LoggingHandler.addHandler(logger, new LoggingHandler());
logger.warning("test warning");
0.2.5
Features
Storage NIO
gcloud-java-nio, a new client library that allows you to interact with Google Cloud Storage using Java's NIO API, is released and is in alpha. Not all NIO features have been implemented yet; see the docs for more information.
The simplest way to get started with gcloud-java-nio is with Paths and Files:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
InputStream and OutputStream can also be used for streaming:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
try (InputStream input = Files.newInputStream(path)) {
// use input stream
}
To configure a bucket per-environment, you can use the FileSystem API:
FileSystem fs = FileSystems.getFileSystem(URI.create("gs://bucket"));
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
If you don't want to rely on Java SPI, which requires a META-INF file in your jar generated by Google Auto, you can instantiate this file system directly as follows:
CloudStorageFileSystem fs = CloudStorageFileSystem.forBucket("bucket");
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
data = Files.readAllBytes(path);
For instructions on how to add Google Cloud Storage NIO support to a legacy jar see this example. For more examples see here.
Fixes
Storage
- Fix BlobReadChannel to support reading and seeking files larger than Integer.MAX_VALUE bytes.
0.2.4
Features
Pub/Sub
gcloud-java-pubsub, a new client library to interact with Google Cloud Pub/Sub, is released and is in alpha. See the docs for more information.
gcloud-java-pubsub uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-pubsub will work on App Engine Flexible.
See PubSubExample for a complete example or API Documentation for gcloud-java-pubsub javadoc.
The following snippet shows how to create a Pub/Sub topic and asynchronously publish messages to it. See CreateTopicAndPublishMessages.java for the full source code.
try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
Topic topic = pubsub.create(TopicInfo.of("test-topic"));
Message message1 = Message.of("First message");
Message message2 = Message.of("Second message");
topic.publishAsync(message1, message2);
}
The following snippet, instead, shows how to create a Pub/Sub pull subscription and asynchronously pull messages from it. See CreateSubscriptionAndPullMessages.java for the full source code.
try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
Subscription subscription =
pubsub.create(SubscriptionInfo.of("test-topic", "test-subscription"));
MessageProcessor callback = new MessageProcessor() {
@Override
public void process(Message message) throws Exception {
System.out.printf("Received message \"%s\"%n", message.payloadAsString());
}
};
// Create a message consumer and pull messages (for 60 seconds)
try (MessageConsumer consumer = subscription.pullAsync(callback)) {
Thread.sleep(60_000);
}
}
0.2.3
Features
BigQuery
- Add support for the BYTES datatype. A field of type BYTES can be created by using Field.Value.bytes(). The byte[] bytesValue() method is added to FieldValue to return the value of a field as a byte array.
- A Job waitFor(WaitForOption... waitOptions) method is added to the Job class. This method waits for the job to complete and returns the job's updated information:
Job completedJob = job.waitFor();
if (completedJob == null) {
// job no longer exists
} else if (completedJob.status().error() != null) {
// job failed, handle error
} else {
// job completed successfully
}
By default, the job status is checked every 500 milliseconds; to configure this value, WaitForOption.checkEvery(long, TimeUnit) can be used. WaitForOption.timeout(long, TimeUnit), instead, sets the maximum time to wait.
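For example, a minimal sketch that polls every second and gives up after three minutes (the interval and timeout values are arbitrary, and job is assumed to be an existing Job instance):
import java.util.concurrent.TimeUnit;
Job completedJob = job.waitFor(
    WaitForOption.checkEvery(1, TimeUnit.SECONDS),
    WaitForOption.timeout(3, TimeUnit.MINUTES));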
Core
AuthCredentials.createFor(String) and AuthCredentials.createFor(String, Date) methods have been added to create AuthCredentials objects given an OAuth2 access token (and possibly its expiration date).
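A minimal sketch of building credentials from an already-obtained token (the token string and expiration below are placeholders):
import java.util.Date;
// Credentials from a raw OAuth2 access token
AuthCredentials credentials = AuthCredentials.createFor("ya29.placeholder-token");
// Or with an explicit expiration date for the token (here, one hour from now)
AuthCredentials expiringCredentials =
    AuthCredentials.createFor("ya29.placeholder-token", new Date(System.currentTimeMillis() + 3600_000L));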
Compute
- An Operation waitFor(WaitForOption... waitOptions) method is added to the Operation class. This method waits for the operation to complete and returns the operation's updated information:
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
// operation no longer exists
} else if (completedOperation.errors() != null) {
// operation failed, handle error
} else {
// operation completed successfully
}
By default, the operation status is checked every 500 milliseconds; to configure this value, WaitForOption.checkEvery(long, TimeUnit) can be used. WaitForOption.timeout(long, TimeUnit), instead, sets the maximum time to wait.
Datastore
Datastore.put and DatastoreBatchWriter.put now support entities with incomplete keys. Both put methods return the just updated/created entities. A putWithDeferredIdAllocation method has also been added to DatastoreBatchWriter.
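A minimal sketch of saving an entity with an incomplete key and getting back the completed entity (the "Task" kind and "description" property are placeholders):
Datastore datastore = DatastoreOptions.defaultInstance().service();
// Build an entity whose key has no id or name; the service allocates one on put
IncompleteKey incompleteKey = datastore.newKeyFactory().kind("Task").newKey();
FullEntity<IncompleteKey> entity = FullEntity.builder(incompleteKey)
    .set("description", "buy milk")
    .build();
// put accepts the incomplete key and returns the stored entity with its allocated key
Entity saved = datastore.put(entity);
System.out.println(saved.key());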
Fixes
Storage
0.2.2
Features
Core
The Clock abstract class is moved out of ServiceOptions. ServiceOptions.clock() is now used by RetryHelper in all service calls. This enables mocking the Clock source used for retries when testing your code.
Storage
- Refactor storage batches to use the common BatchResult class. Sending batch requests in Storage is now as simple as in DNS. See the following example of sending a batch request:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
@Override
public void success(Boolean result) {
// handle delete result
}
@Override
public void error(StorageException exception) {
// handle exception
}
});
// Ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException
Fixes
Datastore
- Update datastore client to accept IP addresses for localhost (#1002).
- LocalDatastoreHelper now uses https to download the emulator - thanks to @pehrs (#942).
- Add example on embedded entities to DatastoreExample (#980).
Storage
- Fix StorageImpl.signUrl for blob names that start with "/" - thanks to @clementdenis (#1013).
- Fix readAllBytes permission error on Google App Engine (#1010).
0.2.1
Features
Compute
gcloud-java-compute, a new client library to interact with Google Compute Engine, is released and is in alpha. See the docs for more information. See ComputeExample for a complete example or API Documentation for gcloud-java-compute javadoc.
The following snippet shows how to create a region external IP address, a persistent boot disk and a virtual machine instance that uses both the IP address and the persistent disk. See CreateAddressDiskAndInstance.java for the full source code.
// Create a service object
// Credentials are inferred from the environment.
Compute compute = ComputeOptions.defaultInstance().service();
// Create an external region address
RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
Operation operation = compute.create(AddressInfo.of(addressId));
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Address " + addressId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Address creation failed");
}
// Create a persistent disk
ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
DiskId diskId = DiskId.of("us-central1-a", "test-disk");
ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
operation = compute.create(disk);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Disk " + diskId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Disk creation failed");
}
// Create a virtual machine instance
Address externalIp = compute.getAddress(addressId);
InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
NetworkId networkId = NetworkId.of("default");
PersistentDiskConfiguration attachConfiguration =
PersistentDiskConfiguration.builder(diskId).boot(true).build();
AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
NetworkInterface networkInterface = NetworkInterface.builder(networkId)
.accessConfigurations(AccessConfig.of(externalIp.address()))
.build();
MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
InstanceInfo instance =
InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
operation = compute.create(instance);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Instance " + instanceId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Instance creation failed");
}
Datastore
- An options(String namespace) method has been added to LocalDatastoreHelper, allowing you to create testing options for a specific namespace (#936).
- of methods have been added to ListValue to support specific types (String, long, double, boolean, DateTime, LatLng, Key, FullEntity and Blob). addValue methods have been added to ListValue.Builder to support the same set of specific types (#934). See the sketch after this list.
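A minimal sketch of the new type-specific overloads (the values are placeholders):
// Build a list value directly from strings
ListValue names = ListValue.of("alice", "bob");
// Or mix types through the builder's new addValue overloads
ListValue mixed = ListValue.builder()
    .addValue("hello")
    .addValue(42L)
    .addValue(true)
    .build();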
DNS
- Support for batches has been added to gcloud-java-dns (#940). Batches allow you to perform a number of operations in a single RPC request.
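A rough sketch of queuing a read operation in a batch and submitting it in a single RPC; the getZone batch method and the "my-zone" name are assumptions for illustration:
Dns dns = DnsOptions.defaultInstance().service();
DnsBatch batch = dns.batch();
// Queue operations; nothing is sent until submit()
DnsBatchResult<Zone> zoneResult = batch.getZone("my-zone");
batch.submit();
// Results become available once the batch has been submitted
Zone zone = zoneResult.get();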
Fixes
Core
- The causing exception is now chained in BaseServiceException.getCause() (#774).
0.2.0
Features
General
gcloud-java has been repackaged. com.google.gcloud has now changed to com.google.cloud, and we're releasing our artifacts on Maven under the Group ID com.google.cloud rather than com.google.gcloud. The new way to add our library as a dependency in your project is as follows:
If you're using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>gcloud-java</artifactId>
<version>0.2.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:gcloud-java:0.2.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"
Storage
- The interface ServiceAccountSigner was added. Both AppEngineAuthCredentials and ServiceAccountAuthCredentials extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854).
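For example, a minimal sketch of producing a signed URL for a blob using the default service-account credentials as signer (the bucket/object names and the 14-day expiration are placeholders):
import java.net.URL;
import java.util.concurrent.TimeUnit;
Storage storage = StorageOptions.defaultInstance().service();
BlobInfo blobInfo = BlobInfo.builder("my-bucket", "my-object").build();
// Create a URL that grants read access to the blob for 14 days
URL signedUrl = storage.signUrl(blobInfo, 14, TimeUnit.DAYS);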
Fixes
General
- The default RPC retry parameters were changed to align with the backoff policy requirement listed in the Service Level Agreements (SLAs) for Cloud BigQuery, Cloud Datastore, and Cloud Storage (#857, #860).
- The expiration date is now properly populated for App Engine credentials (#873, #894).
- gcloud-java now uses the project ID given in the credentials file specified by the environment variable GOOGLE_APPLICATION_CREDENTIALS (if set) (#845).
BigQuery
- Job's isDone method is fixed to return true if the job is complete or the job doesn't exist (#853).
Datastore
- LocalGcdHelper has been renamed to LocalDatastoreHelper, and the command line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the create, start, and stop methods (#821).
- ListValue no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).
DNS
- There were some minor changes to ChangeRequest, namely adding reload/isDone methods and changing the method signature of applyTo (#849).
Storage
- RemoteGcsHelper was renamed to RemoteStorageHelper to be more consistent with other modules' test helpers (#821).
0.1.7
Features
Datastore
gcloud-java-datastore now uses Cloud Datastore v1beta3. You can read more about updates in Datastore v1beta3 here. Note that to use this new API, you may have to re-enable the Google Cloud Datastore API in the Developers Console. The following API changes are coupled with this update.
- Entity-related changes:
  - Entities are indexed by default, and indexed has been changed to excludeFromIndexes. Properties of type EntityValue and type ListValue can now be indexed. Moreover, indexing and querying properties inside of entity values is now supported. Values inside entity values are indexed by default.
  - LatLng and LatLngValue, representing the new property type for latitude & longitude, are added.
  - The getter for a value's meaning has been made package scope instead of public, as it is a deprecated field.
- Read/write-related changes:
  - Force writes have been removed. Since force writes were the only existing option in batch and transaction options, the BatchOption and TransactionOption classes are now removed.
  - ReadOption is added to allow users to specify eventual consistency on Datastore reads. This can be a useful optimization when strongly consistent results for get/fetch or ancestor queries aren't necessary (see the sketch after this list).
- Query-related changes:
  - QueryResults.cursorAfter() is updated to point to the position after the last consumed result. In v1beta2, cursorAfter was only updated after all results were consumed.
  - groupBy is replaced by distinctOn.
  - The Projection class in StructuredQuery is replaced with a string representing the property name. Aggregation functions are removed.
  - There are changes in GQL syntax:
    - In synthetic literal KEY, DATASET is now PROJECT.
    - The BLOBKEY synthetic literal is removed.
    - The FIRST aggregator is removed.
    - The GROUP BY clause is replaced with DISTINCT ON.
    - Fully-qualified property names are now supported.
    - Query filters on timestamps prior to the epoch are now supported.
- Other miscellaneous changes:
  - The "userinfo.email" authentication scope is no longer required. This means you don't need to enable that permission when creating new instances on Google Compute Engine to use gcloud-java-datastore.
  - The default value for namespace is now an empty string rather than null.
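A minimal sketch of an eventually consistent read with the new ReadOption (datastore and key are assumed to be an existing service object and key):
// Eventually consistent lookup; may return slightly stale data but avoids the cost of a strong read
Entity entity = datastore.get(key, ReadOption.eventualConsistency());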
Fixes
General
- In gcloud-java-bigquery, gcloud-java-dns, and gcloud-java-storage, the field id() has been renamed to generatedId for classes that are assigned ids from the service.
Datastore
- Issue #548 (internal errors when trying to load large numbers of entities without setting a limit) is fixed. The workaround mentioned in that issue is no longer necessary.