Note
This is the code for the Gemini in Java with Vertex AI and LangChain4j codelab, geared towards Java developers who want to discover Gemini, the Large Language Model by Google, and its open variant Gemma, using the LangChain4j framework.
There are also Python versions of these samples in the python folder.
The code examples have been tested on the following environment:
- Java 21
- Gradle 8.6
In order to run these examples, you need a Google Cloud account and a project ready.
You also need to make sure the Vertex AI API is enabled:
gcloud services enable aiplatform.googleapis.com
Before running the examples, you'll need to set up two environment variables:
export PROJECT_ID=YOUR_PROJECT_ID
export LOCATION=us-central1
Warning
Be sure to update the project ID and location to match your project.
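To double-check that the environment is configured, a minimal standalone sketch (a hypothetical helper, not part of the codelab) can read these variables the same way the samples do:

```java
public class CheckEnv {
    public static void main(String[] args) {
        // The samples read the project ID and location from the environment.
        String projectId = System.getenv("PROJECT_ID");
        // Fall back to the default region suggested above if LOCATION is unset.
        String location = System.getenv().getOrDefault("LOCATION", "us-central1");

        if (projectId == null || projectId.isBlank()) {
            System.out.println("PROJECT_ID is not set -- export it before running the samples.");
        } else {
            System.out.println("Using project " + projectId + " in " + location);
        }
    }
}
```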
Create the Gradle wrapper:
gradle wrapper
Here is the list of samples for the different use cases:
- Simple Question & Answer
  ./gradlew run -q -DjavaMainClass=gemini.workshop.QA
- Simple Question & Answer via streaming
  ./gradlew run -q -DjavaMainClass=gemini.workshop.StreamQA
- Hold a conversation with a chatbot
  ./gradlew run -q -DjavaMainClass=gemini.workshop.Conversation
- Describing an image with multimodality (text + image)
  ./gradlew run -q -DjavaMainClass=gemini.workshop.Multimodal
- Extracting structured data from unstructured text
  ./gradlew run -q -DjavaMainClass=gemini.workshop.ExtractData
- Prompt templates
  ./gradlew run -q -DjavaMainClass=gemini.workshop.TemplatePrompt
- Text classification & sentiment analysis
  ./gradlew run -q -DjavaMainClass=gemini.workshop.TextClassification
- Retrieval Augmented Generation
  ./gradlew run -q -DjavaMainClass=gemini.workshop.RAG
- Function calling
  ./gradlew run -q -DjavaMainClass=gemini.workshop.FunctionCalling
- Function calling assistant
  ./gradlew run -q -DjavaMainClass=gemini.workshop.FunctionCallingAssistant
- Multi function calling assistant
  ./gradlew run -q -DjavaMainClass=gemini.workshop.MultiFunctionCallingAssistant
- Running Gemma with the Ollama TestContainer
  ./gradlew run -q -DjavaMainClass=gemini.workshop.GemmaWithOllamaContainer
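For orientation, the simplest sample boils down to something like the following sketch. It assumes the langchain4j-vertex-ai-gemini dependency and valid Google Cloud credentials; the model name and exact builder options may differ across LangChain4j versions, so treat this as a rough outline rather than the codelab's actual code:

```java
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class QaSketch {
    public static void main(String[] args) {
        // Build a Gemini chat model pointing at your project and region.
        VertexAiGeminiChatModel model = VertexAiGeminiChatModel.builder()
            .project(System.getenv("PROJECT_ID"))
            .location(System.getenv("LOCATION"))
            .modelName("gemini-1.5-flash-001") // assumed model name; pick one available in your region
            .build();

        // Ask a single question and print the answer.
        System.out.println(model.generate("Why is the sky blue?"));
    }
}
```

This requires the environment variables from the setup step above and a project with the Vertex AI API enabled, so it will fail fast if either is missing.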
This is not an official Google product.