Skip errors when reading a file fails #231

Open · wants to merge 1 commit into base: main
11 changes: 7 additions & 4 deletions seagoat/engine.py
@@ -103,10 +103,13 @@ def _create_vector_embeddings(self, minimum_chunks_to_analyze=None):
     chunks_to_process = []

     for file, _ in self.repository.top_files():
-        for chunk in file.get_chunks():
-            if chunk.chunk_id not in self.cache.data["chunks_already_analyzed"]:
-                chunks_to_process.append(chunk)
-                self.cache.data["chunks_not_yet_analyzed"].add(chunk.chunk_id)
+        try:
+            for chunk in file.get_chunks():
+                if chunk.chunk_id not in self.cache.data["chunks_already_analyzed"]:
+                    chunks_to_process.append(chunk)
+                    self.cache.data["chunks_not_yet_analyzed"].add(chunk.chunk_id)
+        except Exception as e:
Owner:

This is a bit generic; I think it might be a bad idea to skip every kind of error.

Owner:

It would probably be very annoying for the server to crash on a repo with hundreds or thousands of files just because one or two files cannot be read. However, most errors are likely to apply to all files, or at least a majority of them, in which case the best behaviour is probably to crash the server and let the user open an issue on GitHub so it can be fixed.

Owner:

I am not sure how to design this well. I am now thinking that maybe there could be a counter that skips the first few errors but crashes after, say, the fifth.

Author:

Since we know the total number of files, maybe it is better to use a relative cut-off, e.g. abort when more than 1% of all files fail. Alternatively, abort when more than x% of the files processed so far are erroneous. This should nicely catch the case where something is fundamentally wrong and all files are failing. In larger repos the probability that there are no "weird" files tends to be very small ;) It would be good if the server were somewhat robust with regard to file ingestion.

The pre-commit check complains about print(): shall we just use logging.error() in the server, or do you have something else in mind for log messages?

Owner:

Yeah, making it a % makes sense to me!

> The pre-commit check complains about print(): shall we just use logging.error() in the server, or do you have something else in mind for log messages?

Yeah, I think it would make sense to use logging.error().

+            print(f"Failed to read file {file.path} => Skipping it ({e})")

     if minimum_chunks_to_analyze is None:
         minimum_chunks_to_analyze = min(
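The relative cut-off discussed in the thread above could look roughly like the following sketch. It skips unreadable files, logs each failure with logging.error(), and aborts only once failures exceed a fraction of the total file count. The names here (collect_chunks, the files/get_chunks shapes, max_failure_ratio) are illustrative placeholders, not SeaGOAT's actual API.

```python
# Sketch: tolerate a bounded share of unreadable files, abort past a threshold.
import logging


def collect_chunks(files, get_chunks, max_failure_ratio=0.01):
    """Gather chunks from every file, skipping failures up to a relative budget.

    Raises RuntimeError once more than max_failure_ratio of all files has
    failed, so a problem affecting every file still crashes loudly.
    """
    chunks = []
    failures = 0
    for file in files:
        try:
            chunks.extend(get_chunks(file))
        except Exception as exc:  # deliberately broad, as in the PR diff
            failures += 1
            logging.error("Failed to read file %s => skipping it (%s)", file, exc)
            if failures / len(files) > max_failure_ratio:
                raise RuntimeError(
                    f"{failures} of {len(files)} files failed to read; aborting"
                ) from exc
    return chunks
```

A per-file counter (the "crash after the fifth error" idea) would be the same loop with `if failures > 5: raise ...` instead of the ratio check; the ratio version scales the tolerance with repository size.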