
fix: replaced vertex models with litellm models in workflow #1258

Open

Himanshvarma wants to merge 2 commits into main from fix/workFlowModlels

Conversation

@Himanshvarma (Contributor) commented Feb 26, 2026

Description

Testing

Additional Notes

Summary by CodeRabbit

  • New Features

    • Added a new API endpoint to dynamically retrieve available AI models (includes a recommended default).
  • Changes

    • Generalized model support to fetch all provider models instead of Gemini-only.
    • UI now uses the API-provided default model for initial selection rather than a hardcoded fallback.
    • Default model placeholder updated to "default-model" and fetch/error messages made more generic.

@gemini-code-assist (bot) commented:

Summary of Changes

Hello @Himanshvarma, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the flexibility of AI model selection within the application's workflow. By transitioning from hardcoded Vertex AI/Gemini model references to a dynamic, provider-agnostic model fetching system, it allows the frontend to seamlessly integrate and display models from various AI providers, particularly LiteLLM's hosted vLLM. This change improves the system's adaptability to different AI backends and simplifies future model additions.

Highlights

  • Unified Model Fetching: Replaced specific API calls for Gemini models with a new generic endpoint /workflow/models to fetch AI models dynamically based on the active provider.
  • LiteLLM Integration: Implemented logic to fetch and filter models from LiteLLM's hosted vLLM provider when LiteLLM is the active AI provider, combining API data with static configurations.
  • Default Model Update: Changed the default model from vertex-gemini-2-5-flash to open-large across various workflow configurations in the frontend.
  • Frontend Adaptability: Updated frontend components (AIAgentConfigUI, QAAgentConfigUI, WorkflowBuilder) to use the new unified model fetching mechanism and reflect the updated default model.


Changelog
  • frontend/src/components/workflow/AIAgentConfigUI.tsx
    • Updated the initial default model to 'open-large'.
    • Refactored the model fetching logic to call the new generic /workflow/models API endpoint.
    • Adjusted the fallback model in getValidModelId to 'open-large'.
  • frontend/src/components/workflow/QAAgentConfigUI.tsx
    • Updated the initial default model to 'open-large'.
    • Refactored the model fetching logic to call the new generic /workflow/models API endpoint.
    • Adjusted the fallback model in getValidModelId to 'open-large'.
  • frontend/src/components/workflow/WorkflowBuilder.tsx
    • Updated the default model name displayed in the Q&A agent description to 'open-large'.
  • server/ai/fetchModels.ts
    • Exported the fetchModelInfoFromAPI function to allow its use in other modules.
  • server/api/workflow.ts
    • Added a new API endpoint GetModelEnumsApi at /workflow/models.
    • Implemented logic within GetModelEnumsApi to fetch models from LiteLLM's hosted vLLM provider or use static configurations based on the active AI provider.
    • Included imports for getActiveProvider, fetchModelInfoFromAPI, MODEL_CONFIGURATIONS, and AIProviders to support dynamic model fetching.
  • server/server.ts
    • Imported GetModelEnumsApi.
    • Registered the new /workflow/models GET endpoint to the application routes.
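A rough orientation sketch of the new surface described in this changelog. The route path /workflow/models, the handler name GetModelEnumsApi, and the response fields data/defaultModel/count come from this PR; the Hono-style wiring, the ModelEnum type, and the loadModelsForActiveProvider helper are assumptions added for illustration, not the repository's actual code.

// Sketch only - NOT the PR's implementation. Assumes a Hono server
// (inferred from the HTTPException usage discussed later in this review).
import { Hono } from "hono"
import { HTTPException } from "hono/http-exception"

type ModelEnum = { enumValue: string; modelType?: string }

// Hypothetical stand-in for the provider branch the PR adds in server/api/workflow.ts
// (LiteLLM -> fetchModelInfoFromAPI, other providers -> MODEL_CONFIGURATIONS).
async function loadModelsForActiveProvider(): Promise<ModelEnum[]> {
  return [{ enumValue: "open-large" }]
}

const app = new Hono()

// Rough equivalent of registering GET /workflow/models -> GetModelEnumsApi in AppRoutes.
app.get("/workflow/models", async (c) => {
  try {
    const models = await loadModelsForActiveProvider()
    return c.json({
      success: true,
      data: models,
      defaultModel: models[0]?.enumValue ?? "default-model",
      count: models.length,
    })
  } catch (error) {
    if (error instanceof HTTPException) throw error
    throw new HTTPException(500, { message: (error as Error).message })
  }
})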
Activity
  • No specific activity has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@coderabbitai (bot) commented Feb 26, 2026

📝 Walkthrough

Walkthrough

Generalized model fetching: frontend agent configs now request provider-agnostic model enums from a new backend endpoint; defaults moved from the Gemini-specific identifier to a generic default-model. Backend exposes a unified GetModelEnumsApi that returns model enums and a defaultModel based on the active provider.

Changes

Cohort / File(s) Summary
Frontend Agent Model Configuration
frontend/src/components/workflow/AIAgentConfigUI.tsx, frontend/src/components/workflow/QAAgentConfigUI.tsx
Removed hardcoded Gemini defaults; initial model state is now "default-model" and models are initialized empty. Added defaultModel state. Renamed fetchGeminiModels → fetchModels and updated the API call to api.workflow.models.$get(); removed Gemini-only filtering and adjusted getValidModelId to prefer the provided modelId, then defaultModel, then the first model or empty (sketched below this table).
Frontend UI Display
frontend/src/components/workflow/WorkflowBuilder.tsx
Updated Q&A Agent node description to reference "default-model" instead of the previous Gemini fallback string.
Backend Model Discovery Export
server/ai/fetchModels.ts
Made fetchModelInfoFromAPI publicly exported (added export keyword) without changing behavior.
Backend Workflow API
server/api/workflow.ts
Added GetModelEnumsApi endpoint that returns model enums and defaultModel depending on active provider. LiteLLM path fetches external models via fetchModelInfoFromAPI, filters/deduplicates, derives metadata and sorts; non-LiteLLM path maps static MODEL_CONFIGURATIONS. Added/adjusted related logging and return shape; GetGeminiModelEnumsApi now delegates to VertexAI handler. Updated Q&A metadata to use config.defaultBestModel.
Backend Server Routing
server/server.ts
Registered new route GET /workflow/models → GetModelEnumsApi in AppRoutes.
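As a plain illustration of the getValidModelId fallback order noted in the table above; validating each candidate against the fetched list is an assumption, and the real component code may differ.

// Illustrative sketch of the described fallback order, not the component's actual code.
function getValidModelId(
  modelId: string | undefined,
  models: string[],
  defaultModel: string,
): string {
  if (modelId && models.includes(modelId)) return modelId                // 1. keep a still-valid selection
  if (defaultModel && models.includes(defaultModel)) return defaultModel // 2. fall back to the API-provided default
  return models[0] ?? ""                                                 // 3. first fetched model, else empty
}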

Sequence Diagram

sequenceDiagram
    participant Client as Frontend Client
    participant API as Server Route GET /workflow/models
    participant Workflow as GetModelEnumsApi
    participant Provider as Active Provider Check
    participant LiteLLM as External LiteLLM API
    participant Config as MODEL_CONFIGURATIONS

    Client->>API: Request /workflow/models
    API->>Workflow: Invoke GetModelEnumsApi
    Workflow->>Provider: Determine active provider

    alt LiteLLM active
        Workflow->>LiteLLM: fetchModelInfoFromAPI()
        LiteLLM-->>Workflow: model list + defaultModel
        Workflow->>Workflow: filter/deduplicate, map metadata, sort
    else other provider
        Workflow->>Config: read MODEL_CONFIGURATIONS for provider
        Config-->>Workflow: provider model list
        Workflow->>Workflow: map to enum format, derive metadata, sort
    end

    Workflow-->>API: { data: enums, defaultModel, count }
    API-->>Client: Return model enums + defaultModel
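Read from the client side, the contract in this diagram might be consumed roughly as follows; api.workflow.models.$get() is named in the walkthrough, while the response guards and fallbacks here are assumptions rather than the PR's exact code.

// Sketch only: how a frontend component might consume GET /workflow/models.
// The `api.workflow.models.$get()` call comes from the walkthrough; the shape
// checks and fallbacks below are illustrative assumptions.
async function fetchWorkflowModels(api: any): Promise<{ models: string[]; defaultModel: string }> {
  const res = await api.workflow.models.$get()
  const data = await res.json()
  if (!data?.success || !Array.isArray(data.data)) {
    throw new Error("Unexpected /workflow/models response")
  }
  const models = data.data
    .map((m: any) => m?.enumValue)
    .filter((v: unknown): v is string => typeof v === "string" && v.length > 0)
  return {
    models,
    defaultModel: data.defaultModel ?? models[0] ?? "default-model",
  }
}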

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • zereraz
  • kalpadhwaryu
  • junaid-shirur
  • devesh-juspay

Poem

🐰
I hopped from Gemini's nest to roam,
Found default-model as a brand new home,
One endpoint now gathers each model's name,
No more hardcodes, the rabbit's game—
Hooray for tidy, provider-agnostic fame! 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled.
Title check | ✅ Passed | The title accurately describes the main change: replacing Vertex models with LiteLLM models throughout the workflow system, which is reflected in the generalization of model fetching, API endpoints, and removal of Gemini-specific filtering across multiple files.
Docstring Coverage | ✅ Passed | Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai (bot) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
server/ai/fetchModels.ts (1)

33-39: Avoid returning mutable cache references from an exported function.

Now that fetchModelInfoFromAPI is exported, returning modelInfoCache.data directly lets external callers accidentally mutate shared cache state.

♻️ Proposed hardening
 export async function fetchModelInfoFromAPI(forceRefresh = false): Promise<any[]> {
@@
     if (age < CACHE_TTL_MS) {
-      return modelInfoCache.data
+      return modelInfoCache.data.map((m) => ({ ...m }))
     }
@@
     if (modelInfoCache) {
       Logger.info("Using stale cached model info")
-      return modelInfoCache.data
+      return modelInfoCache.data.map((m) => ({ ...m }))
     }

Also applies to: 94-97
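If the runtime provides structuredClone (Node 17+ or Bun), a deep copy is another way to achieve the same isolation; this is only a sketch of the idea, not a change made in this PR, and the type name below is illustrative.

// Sketch: hand out defensive copies so callers cannot mutate the shared cache.
type ModelInfo = Record<string, unknown>

function snapshotModelInfo(cached: ModelInfo[]): ModelInfo[] {
  return typeof structuredClone === "function"
    ? structuredClone(cached)             // deep copy where available
    : cached.map((item) => ({ ...item })) // otherwise shallow-copy each entry
}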

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/ai/fetchModels.ts` around lines 33 - 39, fetchModelInfoFromAPI
currently returns the cached array reference (modelInfoCache.data) which allows
external code to mutate shared cache; update fetchModelInfoFromAPI to return a
copy of the cached data (e.g., use a shallow/deep clone/structuredClone or array
mapping) whenever you return modelInfoCache.data so callers get an immutable
copy. Apply the same change to the other return site that returns
modelInfoCache.data (the block around lines 94-97) so no exported function
returns a direct reference to the internal cache.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@frontend/src/components/workflow/AIAgentConfigUI.tsx`:
- Around line 112-117: The current logic in the data fetch block (checking
data.success and Array.isArray(data.data)) calls setModels(enumValues) even when
enumValues is an empty array, which can clear valid defaults; update the
conditional in the handler that maps and sets models so that you only call
setModels(enumValues) and setModelsLoaded(true) when enumValues.length > 0,
otherwise leave existing models unchanged (and optionally still
setModelsLoaded(true) or set an error/loaded flag); locate this logic around the
mapping and state updates using setModels, setModelsLoaded, data.success and
data.data to implement the guard.

In `@frontend/src/components/workflow/QAAgentConfigUI.tsx`:
- Around line 103-109: The current code always calls setModels(enumValues) even
when enumValues is empty, which clears the dropdown; change the logic in
QAAgentConfigUI.tsx so that after computing enumValues you only call
setModels(enumValues) and setModelsLoaded(true) when enumValues.length > 0
(i.e., if enumValues is non-empty); if empty, do not call setModels (leave
existing models intact) and do not flip modelsLoaded so the UI doesn't replace
valid options with an empty list.

In `@server/api/workflow.ts`:
- Around line 5765-5769: The catch block in the get model enums flow is
converting every error into HTTPException(500), which masks existing
HTTPException(400) client errors; update the catch in the function/method around
Logger.error(..., "Failed to get model enums") to detect if error is an instance
of HTTPException and rethrow it unchanged, otherwise log and throw a new
HTTPException(500) with getErrorMessage(error); reference the catch block and
HTTPException usage so you only change the error handling logic there.

---

Nitpick comments:
In `@server/ai/fetchModels.ts`:
- Around line 33-39: fetchModelInfoFromAPI currently returns the cached array
reference (modelInfoCache.data) which allows external code to mutate shared
cache; update fetchModelInfoFromAPI to return a copy of the cached data (e.g.,
use a shallow/deep clone/structuredClone or array mapping) whenever you return
modelInfoCache.data so callers get an immutable copy. Apply the same change to
the other return site that returns modelInfoCache.data (the block around lines
94-97) so no exported function returns a direct reference to the internal cache.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71531e4 and 797dc46.

📒 Files selected for processing (6)
  • frontend/src/components/workflow/AIAgentConfigUI.tsx
  • frontend/src/components/workflow/QAAgentConfigUI.tsx
  • frontend/src/components/workflow/WorkflowBuilder.tsx
  • server/ai/fetchModels.ts
  • server/api/workflow.ts
  • server/server.ts

@gemini-code-assist (bot) left a comment

Code Review

This pull request refactors the model fetching logic to be more generic, replacing hardcoded Vertex AI models with a dynamic approach that supports LiteLLM and other providers. A new API endpoint /workflow/models is introduced for this purpose. The changes look good overall. I've added a few comments suggesting refactoring to reduce code duplication and improve type safety in both the frontend and backend. Specifically, there's an opportunity to extract duplicated logic in server/api/workflow.ts into helper functions, and duplicated model fetching logic in the frontend components into a custom hook.
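One way to picture the custom-hook suggestion above; the hook name, guards, and state shape are illustrative only and not part of this PR.

// Sketch only: a shared hook both agent config components could call instead of
// duplicating the fetch. useWorkflowModels and its internals are hypothetical.
import { useEffect, useState } from "react"

export function useWorkflowModels(api: any) {
  const [models, setModels] = useState<string[]>([])
  const [defaultModel, setDefaultModel] = useState<string>("default-model")
  const [loaded, setLoaded] = useState(false)

  useEffect(() => {
    let cancelled = false
    ;(async () => {
      try {
        const res = await api.workflow.models.$get()
        const data = await res.json()
        const enums: string[] = Array.isArray(data?.data)
          ? data.data.map((m: any) => m?.enumValue).filter(Boolean)
          : []
        if (cancelled || enums.length === 0) return // keep existing options on empty payloads
        setModels(enums)
        setDefaultModel(
          data.defaultModel && enums.includes(data.defaultModel) ? data.defaultModel : enums[0],
        )
      } catch (err) {
        console.error("Failed to fetch workflow models", err)
      } finally {
        if (!cancelled) setLoaded(true)
      }
    })()
    return () => {
      cancelled = true
    }
  }, [api])

  return { models, defaultModel, modelsLoaded: loaded }
}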

name: "AI Agent",
description: "some agent description",
model: "vertex-gemini-2-5-flash",
model: "open-large",
Contributor commented:

@Himanshvarma take this value from the env, as we should be calling the private-large or open-large-sa model in LiteLLM.
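One reading of this suggestion is to let the server resolve the workflow default from the environment and return it as defaultModel; the variable name below is hypothetical, not one defined in this repo.

// Sketch only: WORKFLOW_DEFAULT_MODEL is a hypothetical env var, not defined in this repo.
const DEFAULT_WORKFLOW_MODEL =
  process.env.WORKFLOW_DEFAULT_MODEL?.trim() || "open-large"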

@Himanshvarma (Author) replied:

ok

@coderabbitai (bot) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
server/api/workflow.ts (1)

5639-5642: ⚠️ Potential issue | 🟠 Major

Preserve explicit HTTP exceptions in GetModelEnumsApi catch block.

Line 5640 throws HTTPException(400), but Line 5772 remaps all errors to 500. This masks client/config errors as server failures.

#!/bin/bash
# Verify catches that always remap to HTTP 500 and whether they preserve HTTPException.
rg -nP 'GetModelEnumsApi|catch\s*\(error\)\s*\{(?:.|\n){0,220}throw new HTTPException\(500' 
rg -n 'instanceof HTTPException'
💡 Proposed fix
   } catch (error) {
     Logger.error(error, "Failed to get model enums")
+    if (error instanceof HTTPException) {
+      throw error
+    }
     throw new HTTPException(500, {
       message: getErrorMessage(error),
     })
   }

Also applies to: 5770-5774

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/workflow.ts` around lines 5639 - 5642, The catch in
GetModelEnumsApi is currently remapping all errors to HTTPException(500), which
hides intentional client errors like the thrown HTTPException(400); update the
catch in GetModelEnumsApi to detect if error is an instance of HTTPException and
rethrow it unchanged, otherwise wrap/throw a new HTTPException(500) for
unexpected errors—locate the GetModelEnumsApi function and its catch block
(around the code that throws HTTPException(400)) and modify the error handling
to preserve existing HTTPException instances instead of always converting to
500.
frontend/src/components/workflow/QAAgentConfigUI.tsx (1)

112-121: ⚠️ Potential issue | 🟠 Major

Guard model state updates when API returns empty/invalid enums.

Line 115 currently overwrites local model options even when the payload has no valid enum values, which can leave an empty dropdown and invalid saves.

🛠️ Suggested fix
-              const enumValues = data.data
-                // .filter((model: any) => model.modelType==="gemini")
-                .map((model: any) => model.enumValue)
-              setModels(enumValues)
-              // Set default model from API response
-              if (data.defaultModel) {
-                setDefaultModel(data.defaultModel)
-              }
-              setModelsLoaded(true)
+              const enumValues = [...new Set(
+                data.data
+                  .map((model: any) => model?.enumValue)
+                  .filter((value: unknown): value is string => typeof value === "string" && value.trim().length > 0)
+              )]
+              if (enumValues.length > 0) {
+                const resolvedDefault =
+                  data.defaultModel && enumValues.includes(data.defaultModel)
+                    ? data.defaultModel
+                    : enumValues[0]
+                setModels(enumValues)
+                setDefaultModel(resolvedDefault)
+                setAgentConfig((prev) => ({
+                  ...prev,
+                  model: enumValues.includes(prev.model) ? prev.model : resolvedDefault,
+                }))
+                setModelsLoaded(true)
+              } else {
+                console.warn("Models API returned no enums; preserving existing model options")
+              }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/workflow/QAAgentConfigUI.tsx` around lines 112 - 121,
The code unconditionally calls setModels with enumValues derived from data.data
even when enumValues is empty/invalid; change the logic in the block that
computes enumValues so you first validate that data.data is an array and
enumValues contains at least one non-empty value before calling setModels, only
call setDefaultModel when data.defaultModel exists and is included in the
validated enumValues, and only then call setModelsLoaded(true); if validation
fails, leave existing model state intact (do not overwrite) and still
setModelsLoaded appropriately (or set an error/empty flag) to avoid leaving the
dropdown empty or saving invalid selections—refer to symbols enumValues,
data.data, setModels, setDefaultModel, and setModelsLoaded to locate and update
the code.
frontend/src/components/workflow/AIAgentConfigUI.tsx (1)

114-123: ⚠️ Potential issue | 🟠 Major

Avoid clearing model options on empty/invalid enum payloads.

Line 117 sets models unconditionally, so an empty mapped payload can blank the dropdown and leave invalid model state.

🛠️ Suggested fix
-              const enumValues = data.data
-                // .filter((model: any) => model.modelType==="gemini")
-                .map((model: any) => model.enumValue)
-              setModels(enumValues)
-              // Set default model from API response
-              if (data.defaultModel) {
-                setDefaultModel(data.defaultModel)
-              }
-              setModelsLoaded(true)
+              const enumValues = [...new Set(
+                data.data
+                  .map((model: any) => model?.enumValue)
+                  .filter((value: unknown): value is string => typeof value === "string" && value.trim().length > 0)
+              )]
+              if (enumValues.length > 0) {
+                const resolvedDefault =
+                  data.defaultModel && enumValues.includes(data.defaultModel)
+                    ? data.defaultModel
+                    : enumValues[0]
+                setModels(enumValues)
+                setDefaultModel(resolvedDefault)
+                setAgentConfig((prev) => ({
+                  ...prev,
+                  model: enumValues.includes(prev.model) ? prev.model : resolvedDefault,
+                }))
+                setModelsLoaded(true)
+              } else {
+                console.warn("Models API returned no enums; preserving existing model options")
+              }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/workflow/AIAgentConfigUI.tsx` around lines 114 - 123,
The code unconditionally calls setModels(enumValues) which can clear the
dropdown when the API returns an empty/invalid enum payload; change the logic in
the same block that builds enumValues so you only call setModels(enumValues)
when enumValues && enumValues.length > 0, otherwise leave the existing models
untouched (do not overwrite with an empty array); also when setting
setDefaultModel(data.defaultModel) ensure the provided default exists in the
current models before calling setDefaultModel, and still call
setModelsLoaded(true) only after preserving a valid models list or when a valid
default is confirmed; adjust use of setModels, setDefaultModel, setModelsLoaded
and the enumValues/data.defaultModel checks accordingly.
🧹 Nitpick comments (1)
server/api/workflow.ts (1)

5725-5725: Use structured logger instead of console.log in API handlers.

Direct console logging here is inconsistent with the rest of this module and makes log control/noise management harder in production.

Also applies to: 5761-5761
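A sketch of routing those counts through the module's Logger instead; the (fields, message) form assumes a pino-style logger, which is not confirmed here, so adjust to whatever signature the project's Logger actually exposes.

// Sketch only: replacement for the flagged console.log calls.
// Assumes Logger accepts pino-style (fields, message) arguments; if it only
// takes a string, fold the count into the message text instead.
Logger.info(
  { provider: "LiteLLM", modelCount: modelEnums.length },
  "Fetched workflow model enums",
)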

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/workflow.ts` at line 5725, Replace the direct console.log calls
that print modelEnums counts with the module's structured logger (e.g.,
processLogger.info or logger.info) and emit the data as structured fields
instead of interpolated strings; specifically change the console.log(`Model
enums for LiteLLM provider from API:`, modelEnums.length, "models") and the
similar occurrence around line 5761 to use processLogger.info({ event:
"lite_llm_model_enums", provider: "LiteLLM", modelCount: modelEnums.length })
(or the project logger instance used elsewhere) so logs are consistent,
searchable, and respect log-level/configuration.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/api/workflow.ts`:
- Around line 5670-5709: modelInfo.model_name can be non-string and using it
directly for seenModelIds, actualName and modelId.includes(...) may throw;
validate and coerce before string operations. Ensure the loop first obtains a
safe modelId (e.g., skip entries where modelInfo or modelInfo.model_name is
null/undefined or coerce with String(modelInfo.model_name)), use that safe
modelId for seenModelIds, actualName resolution (guard litellm_params?.model
similarly) and when computing modelType (only call includes on a string), and
skip or log malformed entries instead of processing them; update references in
this block (modelInfo, modelId, actualName, modelEnums, MODEL_CONFIGURATIONS)
accordingly.

---

Duplicate comments:
In `@frontend/src/components/workflow/AIAgentConfigUI.tsx`:
- Around line 114-123: The code unconditionally calls setModels(enumValues)
which can clear the dropdown when the API returns an empty/invalid enum payload;
change the logic in the same block that builds enumValues so you only call
setModels(enumValues) when enumValues && enumValues.length > 0, otherwise leave
the existing models untouched (do not overwrite with an empty array); also when
setting setDefaultModel(data.defaultModel) ensure the provided default exists in
the current models before calling setDefaultModel, and still call
setModelsLoaded(true) only after preserving a valid models list or when a valid
default is confirmed; adjust use of setModels, setDefaultModel, setModelsLoaded
and the enumValues/data.defaultModel checks accordingly.

In `@frontend/src/components/workflow/QAAgentConfigUI.tsx`:
- Around line 112-121: The code unconditionally calls setModels with enumValues
derived from data.data even when enumValues is empty/invalid; change the logic
in the block that computes enumValues so you first validate that data.data is an
array and enumValues contains at least one non-empty value before calling
setModels, only call setDefaultModel when data.defaultModel exists and is
included in the validated enumValues, and only then call setModelsLoaded(true);
if validation fails, leave existing model state intact (do not overwrite) and
still setModelsLoaded appropriately (or set an error/empty flag) to avoid
leaving the dropdown empty or saving invalid selections—refer to symbols
enumValues, data.data, setModels, setDefaultModel, and setModelsLoaded to locate
and update the code.

In `@server/api/workflow.ts`:
- Around line 5639-5642: The catch in GetModelEnumsApi is currently remapping
all errors to HTTPException(500), which hides intentional client errors like the
thrown HTTPException(400); update the catch in GetModelEnumsApi to detect if
error is an instance of HTTPException and rethrow it unchanged, otherwise
wrap/throw a new HTTPException(500) for unexpected errors—locate the
GetModelEnumsApi function and its catch block (around the code that throws
HTTPException(400)) and modify the error handling to preserve existing
HTTPException instances instead of always converting to 500.

---

Nitpick comments:
In `@server/api/workflow.ts`:
- Line 5725: Replace the direct console.log calls that print modelEnums counts
with the module's structured logger (e.g., processLogger.info or logger.info)
and emit the data as structured fields instead of interpolated strings;
specifically change the console.log(`Model enums for LiteLLM provider from
API:`, modelEnums.length, "models") and the similar occurrence around line 5761
to use processLogger.info({ event: "lite_llm_model_enums", provider: "LiteLLM",
modelCount: modelEnums.length }) (or the project logger instance used elsewhere)
so logs are consistent, searchable, and respect log-level/configuration.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 797dc46 and 4d635fd.

📒 Files selected for processing (4)
  • frontend/src/components/workflow/AIAgentConfigUI.tsx
  • frontend/src/components/workflow/QAAgentConfigUI.tsx
  • frontend/src/components/workflow/WorkflowBuilder.tsx
  • server/api/workflow.ts
