The CodeReview Agent is an AI-powered GitHub assistant that automates code review, repository analysis, and developer productivity tasks. It leverages multiple tools to fetch repository data, analyze code, review pull requests, and create actionable GitHub issues.
Key Features:
- Automated code review and issue creation for code smells, TODOs, and best practices.
- Fetches and summarizes user and repository statistics.
- Analyzes contribution activity over time.
- Supports pull request review, commenting, and creation.
- Can fork and create repositories programmatically.
Tools:
- getGitHubUserTool: Fetch detailed GitHub user profile information (see the sketch after this list for how a tool like this might be defined).
- getUserContributionActivityTool: Summarize a user's contribution activity (commits, PRs, issues, repos, forks) for a given period (7d, 14d, 30d).
- getRepositoryCommits: List commits for a repository.
- createRepository: Create a new repository for the authenticated user.
- forkRepository: Fork a repository to the authenticated user's account.
- getFileContent: Fetch and decode file content from a repository.
- getFilePaths: List all file paths in a repository (by tree SHA or branch).
- getRepositoryPullRequests: List pull requests for a repository.
- summarizePullRequests: Summarize pull requests (counts and summary string).
- reviewPullRequest: Submit a review for a pull request (approve, request changes, or comment).
- createPullRequest: Create a new pull request in a repository.
- commentOnPullRequest: Add a comment to a pull request.
- getRepositoryIssues: List issues for a repository (excluding pull requests).
- createIssue: Create a new issue in a repository.
- commentOnIssue: Add a comment to an issue.
- summarizeIssues: Summarize issues (counts and summary string).
Workflows:
- ghProfile: Fetch a user's GitHub profile and return structured profile data.
- createAddressIssueFromReview: Fetch files from a repository, analyze them for code smells (e.g., TODO, FIXME, `any`, `console.log`), and automatically create GitHub issues with suggested fixes.
- ContributionWorkflow: Summarize a user's contribution activity (commits, PRs, issues, repos, forks) for a specified period (7, 14, or 30 days).
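As an illustration, a tool such as getGitHubUserTool could be defined with Mastra's `createTool` helper. This is a minimal sketch, not the repository's actual implementation; the tool id, schema, and unauthenticated GitHub endpoint are assumptions:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// Illustrative sketch only -- the real tool in src/mastra/tools/ may differ.
export const getGitHubUserTool = createTool({
  id: "get-github-user", // assumed id
  description: "Fetch detailed GitHub user profile information.",
  inputSchema: z.object({
    username: z.string().describe("GitHub username to look up"),
  }),
  execute: async ({ context }) => {
    // Unauthenticated call to the public GitHub REST API.
    const res = await fetch(`https://api.github.com/users/${context.username}`);
    if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
    return (await res.json()) as Record<string, unknown>;
  },
});
```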
The main agent, `githubAgent`, is configured with all the above tools and workflows. It is registered in `src/mastra/index.ts` as part of the Mastra application.
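A minimal sketch of how this wiring might look, assuming Mastra's `Agent` class and a `model` export from `src/mastra/config.ts` (the instructions string and import paths here are illustrative):

```typescript
// src/mastra/agents/github-agent.ts (illustrative sketch)
import { Agent } from "@mastra/core/agent";
import { model } from "../config"; // assumed export from src/mastra/config.ts
import { getGitHubUserTool } from "../tools"; // ...plus the other tools listed above

export const githubAgent = new Agent({
  name: "githubAgent",
  instructions:
    "You review GitHub repositories, summarize activity, and file actionable issues.",
  model,
  tools: { getGitHubUserTool /* , ...remaining tools */ },
});

// src/mastra/index.ts (illustrative sketch)
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  agents: { githubAgent },
});
```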
The agent uses the following environment variables (with defaults):
- `MODEL_NAME_AT_ENDPOINT` (default: `qwen2.5:1.5b`)
- `API_BASE_URL` (default: `http://127.0.0.1:11434/api`)
These control the LLM model and API endpoint used for chat and code review.
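A sketch of how `src/mastra/config.ts` might consume these variables; it assumes the `ollama-ai-provider` package, and the actual file may differ:

```typescript
// src/mastra/config.ts (illustrative sketch)
import { createOllama } from "ollama-ai-provider";

const modelName = process.env.MODEL_NAME_AT_ENDPOINT ?? "qwen2.5:1.5b";
const baseURL = process.env.API_BASE_URL ?? "http://127.0.0.1:11434/api";

// An OpenAI-API-compatible model handle that the agent can use.
const ollama = createOllama({ baseURL });
export const model = ollama(modelName);
```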
Project structure:
- `src/mastra/tools/` — All GitHub and file tools
- `src/mastra/workflows/` — Workflows for code review, profile, and contribution analysis
- `src/mastra/agents/` — Agent entrypoint and registration
- `src/mastra/config.ts` — Model and API configuration
- `Dockerfile` — Container build and run instructions
You can run the CodeReview Agent locally or with Docker.
```bash
npm install -g pnpm
pnpm install
pnpm run build
pnpm run dev
```
Create a `.env` file in the project root (or set these variables in your environment):
```bash
MODEL_NAME_AT_ENDPOINT=qwen2.5:1.5b
API_BASE_URL=http://127.0.0.1:11434/api
```
- `MODEL_NAME_AT_ENDPOINT`: The LLM model to use (default: `qwen2.5:1.5b`).
- `API_BASE_URL`: The base URL for the Ollama API (default: `http://127.0.0.1:11434/api`).
You can change these to use a different model or endpoint as needed.
The CodeReview Agent relies on a Large Language Model (LLM) to perform its tasks, such as code analysis, summarization, and issue creation. The agent is designed to work with any LLM that supports the OpenAI API format, but it is optimized for the `qwen2.5:1.5b` model provided by Ollama.
You can use the following endpoint and model for testing, if you wish:
```bash
MODEL_NAME_AT_ENDPOINT=qwen2.5:1.5b
API_BASE_URL=https://dashboard.nosana.com/jobs/GPVMUckqjKR6FwqnxDeDRqbn34BH7gAa5xWnWuNH1drf
```
The default configuration uses a local Ollama LLM.
For local development, or if you prefer to use your own LLM, you can use Ollama to serve the lightweight `qwen2.5:1.5b` model.
Installation & Setup:
- Start the Ollama service:
  ```bash
  ollama serve
  ```
- Pull and run the `qwen2.5:1.5b` model:
  ```bash
  ollama pull qwen2.5:1.5b
  ollama run qwen2.5:1.5b
  ```
- Update your `.env` file.
There are two predefined environments defined in the `.env` file: one for local development, and another with a larger model, `qwen2.5:32b`, for more complex use cases.
Why `qwen2.5:1.5b`?
- Lightweight (only ~1GB)
- Fast inference on CPU
- Supports tool calling
- Great for development and testing
Do note that `qwen2.5:1.5b` is not suited for complex tasks.
The Ollama server will run on `http://localhost:11434` by default and is compatible with the OpenAI API format that Mastra expects.
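To verify the server is up before starting the agent, you can hit Ollama's native `/api/generate` endpoint directly. A minimal sketch (the prompt is arbitrary; run as an ES module so top-level await works):

```typescript
// Quick sanity check that Ollama is serving the model.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "qwen2.5:1.5b", prompt: "Say hello.", stream: false }),
});
const data = await res.json();
console.log(data.response); // the model's completion text
```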
A `Dockerfile` is provided for easy containerized deployment. The image installs Node.js, dependencies, and Ollama for LLM inference.
```bash
docker build -t codereview-agent .
docker run -p 8080:8080 \
  -e MODEL_NAME_AT_ENDPOINT=qwen2.5:1.5b \
  -e API_BASE_URL=http://127.0.0.1:11434/api \
  codereview-agent
```
- The container will start the Ollama service, pull the specified model, and launch the agent on port 8080.
- You can override the model or API endpoint by setting the environment variables above.
- Deploy your Docker container on Nosana
- Your agent must successfully run on the Nosana network
- Include the Nosana job ID or deployment link
We have included a Nosana job definition at `./nos_job_def/nosana_mastra.json`, which you can use to publish your agent to the Nosana network.
A. Deploying using @nosana/cli
- Edit the file and add your published Docker image to the `image` property:
  ```json
  "image": "docker.io/yourusername/agent-challenge:latest"
  ```
- Download and install the @nosana/cli.
- Load your wallet with some funds:
  - Retrieve your address with:
    ```bash
    nosana address
    ```
  - Go to our Discord and ask for some NOS and SOL to publish your job.
- Run:
  ```bash
  nosana job post --file nosana_mastra.json --market nvidia-3060 --timeout 30
  ```
- Go to the Nosana Dashboard to see your job.
B. Deploying using the Nosana Dashboard
- Make sure you have the Phantom wallet (https://phantom.com/) installed in your browser.
- Go to our Discord and ask for some NOS and SOL to publish your job.
- Click the `Expand` button on the Nosana Dashboard.
- Copy and paste your edited Nosana job definition file into the textarea.
- Choose an appropriate GPU for the AI model that you are using.
- Click `Deploy`.
This agent is deployed on the Nosana Builders Challenge platform, which provides a serverless environment for running AI agents. For more information, visit the Nosana Builders Challenge website.