Feature/traces region #477
base: main
Conversation
- Introduced `__get_nexus_s3_client` method to create an S3 client for the nexus bucket.
- Updated `upload_traces` and `get_trace_file` methods to use the new nexus S3 client and bucket settings.
- Added nexus bucket and region names to the BedrockFileDatabase class for improved configuration management.
- Updated `upload_traces` and `get_trace_file` methods to utilize the nexus S3 client and bucket.
- Adjusted exception handling to reflect changes in the S3 client usage.
Hello @AlisoSouza, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request introduces changes to the `bedrock.py` file within the `nexus/task_managers/file_database` directory. The primary goal is to enable the uploading and retrieval of trace files from a separate S3 bucket (referred to as the 'nexus' bucket) in addition to the existing Bedrock bucket. This involves adding new configuration settings, initializing a new S3 client specifically for the nexus bucket, and updating the `upload_traces` and `get_trace_file` methods to use the new client and bucket.
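For orientation, here is a minimal sketch of how these pieces might fit together in the `BedrockFileDatabase` class. Method bodies are abbreviated, and the settings import is an assumption (the PR only references `settings.AWS_S3_BUCKET_NAME` and `settings.AWS_S3_REGION_NAME`); this is not the full file from the PR.

```python
import boto3
from io import BytesIO

from django.conf import settings  # assumption: project-level settings module


class BedrockFileDatabase:
    def __init__(self):
        # New settings introduced in this PR for the nexus bucket
        self.nexus_bucket_name = settings.AWS_S3_BUCKET_NAME
        self.nexus_region_name = settings.AWS_S3_REGION_NAME
        # Dedicated client for the nexus bucket, alongside the existing Bedrock S3 client
        self.nexus_s3_client = self.__get_nexus_s3_client()

    def __get_nexus_s3_client(self):
        # Client pinned to the nexus region
        return boto3.client("s3", region_name=self.nexus_region_name)

    def upload_traces(self, data, key):
        # Serialize the trace payload and upload it to the nexus bucket
        bytes_stream = BytesIO(data.encode("utf-8"))
        self.nexus_s3_client.upload_fileobj(bytes_stream, self.nexus_bucket_name, key)

    def get_trace_file(self, key):
        try:
            response = self.nexus_s3_client.get_object(Bucket=self.nexus_bucket_name, Key=key)
            return response["Body"].read().decode("utf-8")
        except self.nexus_s3_client.exceptions.NoSuchKey:
            # The PR returns an empty list here; see the review comment further down
            return []
```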
Highlights
- New S3 Client: A new S3 client (`nexus_s3_client`) is initialized to interact with the nexus S3 bucket, separate from the Bedrock S3 bucket.
- Configuration Settings: New configuration settings (`AWS_S3_BUCKET_NAME` and `AWS_S3_REGION_NAME`) are introduced to specify the nexus S3 bucket and region (a hedged settings sketch follows this list).
- Trace File Handling: The `upload_traces` and `get_trace_file` methods are updated to use the `nexus_s3_client` and `nexus_bucket_name` for uploading and retrieving trace files.
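As referenced above, a hedged sketch of what those two settings might look like, assuming they are read from the environment by the project's settings module (the actual loading mechanism is not shown in this PR):

```python
import os

# Setting names come from the PR; the environment-variable loading is an assumption
AWS_S3_BUCKET_NAME = os.environ.get("AWS_S3_BUCKET_NAME")
AWS_S3_REGION_NAME = os.environ.get("AWS_S3_REGION_NAME")
```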
Changelog
- `nexus/task_managers/file_database/bedrock.py`
  - Added `nexus_bucket_name` and `nexus_region_name` attributes to the Bedrock class, initialized with the `AWS_S3_BUCKET_NAME` and `AWS_S3_REGION_NAME` settings (lines 53-54).
  - Initialized `nexus_s3_client` using the `__get_nexus_s3_client` method (line 64).
  - Created the `__get_nexus_s3_client` method to create an S3 client using the `nexus_region_name` (lines 830-834).
  - Modified `upload_traces` to use `nexus_s3_client` and `nexus_bucket_name` for uploading trace files (line 1024).
  - Modified `get_trace_file` to use `nexus_s3_client` and `nexus_bucket_name` for retrieving trace files (lines 1028-1030).
  - Updated exception handling in `get_trace_file` to catch `nexus_s3_client.exceptions.NoSuchKey` (line 1030).
  - Added
Code Review
The pull request introduces functionality to upload and retrieve trace files to/from an S3 bucket, specifically for Bedrock agents. It involves adding new S3 client configurations and methods for uploading and retrieving data. The changes seem reasonable, but there are a few areas that could be improved for clarity and maintainability.
Summary of Findings
- Duplicated S3 Client Logic: The code introduces a new S3 client (`nexus_s3_client`) that mirrors the existing one (`s3_client`). Consider refactoring to avoid duplication and improve maintainability.
- Missing Error Handling: The `upload_traces` function lacks explicit error handling. Adding error logging or raising exceptions would improve the robustness of the code.
- Inconsistent Naming: The naming convention for the new S3 client and related variables could be more consistent with existing conventions.
Merge Readiness
The pull request introduces important functionality, but some areas need improvement before merging. Specifically, the duplicated S3 client logic and missing error handling should be addressed. I am unable to approve this pull request and recommend that it not be merged until the high-severity issues are addressed. Please have others review and approve this code before merging.
```python
    def __get_nexus_s3_client(self):
        return boto3.client(
            "s3",
            region_name=self.nexus_region_name
        )
```
This function duplicates the logic in `__get_s3_client`. Consider refactoring to avoid duplication. You could potentially pass the region name as an argument to a single `__get_s3_client` function.
```python
    def __get_s3_client(self, region_name):
        return boto3.client(
            "s3",
            region_name=region_name
        )

    def __get_nexus_s3_client(self):
        return self.__get_s3_client(self.nexus_region_name)
```
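If this refactor were adopted, the existing no-argument call that builds the Bedrock client would also need to pass a region. The attribute name used for the Bedrock region below is an assumption, not code from the PR:

```python
# Constructor wiring after the suggested refactor (Bedrock region attribute name assumed)
self.s3_client = self.__get_s3_client(self.region_name)
self.nexus_s3_client = self.__get_nexus_s3_client()
```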
```diff
     def upload_traces(self, data, key):
         bytes_stream = BytesIO(data.encode('utf-8'))
-        self.s3_client.upload_fileobj(bytes_stream, self.bucket_name, key)
+        self.nexus_s3_client.upload_fileobj(bytes_stream, self.nexus_bucket_name, key)
```
Consider adding error handling to this function. If `upload_fileobj` fails, it would be helpful to log the error or raise an exception.
```python
        try:
            bytes_stream = BytesIO(data.encode('utf-8'))
            self.nexus_s3_client.upload_fileobj(bytes_stream, self.nexus_bucket_name, key)
        except Exception as e:
            print(f"Error uploading trace file: {e}")
            raise
```
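A variant of the same suggestion using the standard `logging` module rather than `print`, which tends to be preferable in long-running services; this is a sketch, not code proposed in the review:

```python
import logging
from io import BytesIO

logger = logging.getLogger(__name__)


def upload_traces(self, data, key):
    try:
        bytes_stream = BytesIO(data.encode("utf-8"))
        self.nexus_s3_client.upload_fileobj(bytes_stream, self.nexus_bucket_name, key)
    except Exception:
        # logger.exception records the stack trace; re-raise so callers can react
        logger.exception("Error uploading trace file %s to bucket %s", key, self.nexus_bucket_name)
        raise
```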
```diff
+        self.nexus_bucket_name = settings.AWS_S3_BUCKET_NAME
+        self.nexus_region_name = settings.AWS_S3_REGION_NAME
```
Consider adding a comment explaining why you need a separate `nexus_bucket_name` and `nexus_region_name`. Is it pointing to a different bucket or region than the original bedrock configurations?
```diff
-        self.nexus_bucket_name = settings.AWS_S3_BUCKET_NAME
-        self.nexus_region_name = settings.AWS_S3_REGION_NAME
+        # Nexus S3 configurations for storing traces
+        self.nexus_bucket_name = settings.AWS_S3_BUCKET_NAME
+        self.nexus_region_name = settings.AWS_S3_REGION_NAME
```
```diff
@@ -58,6 +61,7 @@
         self.bedrock_runtime = self.__get_bedrock_runtime()
         self.lambda_client = self.__get_lambda_client()
         self.s3_client = self.__get_s3_client()
+        self.nexus_s3_client = self.__get_nexus_s3_client()
```
It seems like you are creating a new S3 client for nexus. Is there a specific reason for creating a new client instead of using the existing `self.s3_client`? If it's for separation of concerns, consider documenting this decision.
```diff
-        self.nexus_s3_client = self.__get_nexus_s3_client()
+        # Nexus S3 client for storing traces
+        self.nexus_s3_client = self.__get_nexus_s3_client()
```
```diff
             response = self.nexus_s3_client.get_object(Bucket=self.nexus_bucket_name, Key=key)
             return response['Body'].read().decode('utf-8')
-        except self.s3_client.exceptions.NoSuchKey:
+        except self.nexus_s3_client.exceptions.NoSuchKey:
```
The return value `[]` in the `except` block might be misleading. It suggests an empty list of traces, but it actually indicates that the trace file was not found. Consider returning `None` or raising an exception to better reflect the situation.
```python
        except self.nexus_s3_client.exceptions.NoSuchKey:
            return None  # Or raise an exception
```
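Putting the suggestion together, the relevant part of `get_trace_file` might read as follows; a sketch based only on the lines visible in this diff:

```python
def get_trace_file(self, key):
    try:
        response = self.nexus_s3_client.get_object(Bucket=self.nexus_bucket_name, Key=key)
        return response["Body"].read().decode("utf-8")
    except self.nexus_s3_client.exceptions.NoSuchKey:
        # Make "not found" explicit instead of returning an empty list
        return None
```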