
Tr/ollama #1434

Merged: 8 commits merged into main from tr/ollama on Jan 2, 2025

Conversation

mrT23 (Collaborator) commented Jan 2, 2025

PR Type

Enhancement, Documentation


Description

  • Added duplicate_prompt_examples configuration for structured output.

  • Enhanced prompt examples for PR review and code suggestions.

  • Updated documentation for Ollama and Hugging Face usage.

  • Improved configuration handling in pr_agent/config_loader.py.
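The new option can be toggled from any pr-agent configuration layer. A minimal sketch of a local override, following the `[config]` section convention used elsewhere in this PR (the value shown and the comment describing the flag's effect are assumptions based on the description above, not taken from the diff):

```toml
[config]
# When enabled, the few-shot example output embedded in the prompt templates
# is duplicated, which can help local models emit valid structured output.
duplicate_prompt_examples = true
```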


Changes walkthrough 📝

Relevant files

Enhancement (8 files)

  • config_loader.py (+1/-1): Added `duplicate_prompt_examples` to configuration loading.
  • pr_code_suggestions.py (+3/-1): Integrated `duplicate_prompt_examples` into code suggestions logic.
  • pr_description.py (+2/-1): Integrated `duplicate_prompt_examples` into PR description logic.
  • pr_reviewer.py (+1/-0): Integrated `duplicate_prompt_examples` into PR reviewer logic.
  • pr_code_suggestions_prompts.toml (+24/-0): Enhanced prompt examples for code suggestions.
  • pr_code_suggestions_reflect_prompts.toml (+19/-0): Enhanced prompt examples for code suggestions reflection.
  • pr_description_prompts.toml (+31/-0): Enhanced prompt examples for PR descriptions.
  • pr_reviewer_prompts.toml (+53/-0): Enhanced prompt examples for PR reviews.

Documentation (1 file)

  • changing_a_model.md (+18/-32): Updated documentation for Ollama and Hugging Face models.

Configuration changes (1 file)

  • configuration.toml (+1/-0): Added `duplicate_prompt_examples` configuration option.

💡 PR-Agent usage: Comment /help "your question" on any pull request to receive relevant information

Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🏅 Score: 92
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Configuration Order

The `.secrets.toml` file is loaded last in the settings files list, so its values can unexpectedly override settings defined in earlier configuration files.

"settings/.secrets.toml",
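pr-agent builds its settings from an ordered list of files in which later entries take precedence, so a key defined in `.secrets.toml` shadows the same key from earlier files. A minimal sketch of that "last file wins" precedence using plain dicts (illustrative only; the file names and values in the example are assumptions, and the real loader is Dynaconf-based):

```python
# Illustration of "last file wins" precedence in a layered settings loader.
# Each layer represents one parsed settings file, listed in load order.
def merge_settings(*layers: dict) -> dict:
    merged: dict = {}
    for layer in layers:
        merged.update(layer)  # later layers override earlier keys
    return merged

base = {"model": "gpt-4o", "duplicate_prompt_examples": False}  # e.g. configuration.toml
secrets = {"model": "ollama/qwen2.5-coder:32b"}                 # e.g. .secrets.toml, loaded last

settings = merge_settings(base, secrets)
print(settings["model"])  # -> ollama/qwen2.5-coder:32b
```

Because the secrets file is last, any non-secret key accidentally placed in it silently wins over the intended configuration, which is the risk flagged above.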

Contributor

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: General | Score: 3

Include specific version information for model dependencies to ensure reproducibility

Add specific version information for the recommended Qwen model to ensure users
select a compatible version.

docs/docs/usage-guide/changing_a_model.md [40-41]

-model = "ollama/qwen2.5-coder:32b"
-fallback_models=["ollama/qwen2.5-coder:32b"]
+model = "ollama/qwen2.5-coder:32b-v1.0" # specify exact version
+fallback_models=["ollama/qwen2.5-coder:32b-v1.0"]
Suggestion importance[1-10]: 3

Why: While version pinning is generally good practice, the suggestion provides moderate value since the current model reference may be intentionally generic to allow for version flexibility.

  • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

mrT23 (Collaborator, Author) commented Jan 2, 2025

/describe

Contributor

PR Description updated to latest commit (7f950a3)

@mrT23 mrT23 requested a review from ofir-frd January 2, 2025 10:53
ofir-frd (Collaborator) commented Jan 2, 2025

/improve
--pr_code_suggestions.commitable_code_suggestions=true

Comment on lines +39 to +42
[config]
model = "ollama/qwen2.5-coder:32b"
fallback_models=["ollama/qwen2.5-coder:32b"]
custom_model_max_tokens=128000 # set the maximal input tokens for the model
Contributor

Suggestion: Add specific version information for the Qwen model to ensure users select a compatible version, and include installation instructions for Ollama. [general, importance: 4]

Suggested change

-[config]
-model = "ollama/qwen2.5-coder:32b"
-fallback_models=["ollama/qwen2.5-coder:32b"]
-custom_model_max_tokens=128000 # set the maximal input tokens for the model
+[config]
+# Ensure you have Ollama installed and the Qwen model pulled:
+# ollama pull qwen2.5-coder:32b
+model = "ollama/qwen2.5-coder:32b" # Requires Qwen version 2.5 or later
+fallback_models=["ollama/qwen2.5-coder:32b"]

ofir-frd (Collaborator) left a comment
The code looks good. I did not test the tools locally.

@mrT23 mrT23 merged commit f6b8017 into main Jan 2, 2025
2 checks passed
@mrT23 mrT23 deleted the tr/ollama branch January 2, 2025 14:26