feat: studio statistics #14701
Conversation
📝 Walkthrough
This pull request integrates Confluent Kafka support into the backend. It adds new package references in the NuGet and project files, implements Kafka configuration via a new settings class, and introduces event records and handlers for deployment pipeline notifications. Repository methods are enhanced to retrieve deployment data while the polling job is updated to publish completion events. A Docker Compose file sets up Kafka, Zookeeper, Schema Registry, and Kafdrop services. Additionally, the dependency injection container is extended, a Kafka producer with its interface is implemented, and integration tests alongside a workflow update are provided.
Force-pushed from f089c07 to ea3c37a
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 5
🧹 Nitpick comments (6)
backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1)
83-86: Clarify or unify environment properties. You store environment information both in `Environment` and in `AppsEnvironment`. This may introduce confusion if different code paths rely on different properties. Consider consolidating them if possible or clarifying their different roles with doc comments.

backend/src/Designer/Events/DeploymentPipelineCompleted.cs (1)
6-13: Add contextual documentation for required properties. These properties are clearly defined and marked as required, which is great. However, consider adding XML doc comments to clarify each property's purpose, especially for future maintainers or external consumers.
backend/src/Designer/Events/DeploymentPipelineQueued.cs (1)
6-12: Consider using immutable properties on the record. Right now, properties use `set;`. If the intent is to keep this record truly immutable, switching to `init;` can help prevent unintended modifications while preserving the ability to initialize properties during object creation.

```diff
-public required AltinnRepoEditingContext EditingContext { get; set; }
-public required int BuildId { get; set; }
-public required string Environment { get; set; }
-public required PipelineType PipelineType { get; set; }
+public required AltinnRepoEditingContext EditingContext { get; init; }
+public required int BuildId { get; init; }
+public required string Environment { get; init; }
+public required PipelineType PipelineType { get; init; }
```

backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsEvent.cs (1)
3-19: Add XML documentation and consider using an enum for event names. The implementation looks good with immutable properties and static instances. A few suggestions for improvement:
- Add XML documentation for public members.
- Consider using an enum for event names to ensure type safety.
```diff
+/// <summary>
+/// Represents a studio statistics event.
+/// </summary>
 public class StudioStatisticsEvent
 {
+    /// <summary>
+    /// Gets the name of the event.
+    /// </summary>
     public string Name { get; private set; }
+
+    /// <summary>
+    /// Gets the description of the event.
+    /// </summary>
     public string Description { get; private set; }
+
+    /// <summary>
+    /// Initializes a new instance of the <see cref="StudioStatisticsEvent"/> class.
+    /// </summary>
     private StudioStatisticsEvent(string name, string description)
     {
         Name = name;
         Description = description;
     }

+    /// <summary>
+    /// Gets the event for successful app deployment.
+    /// </summary>
     public static StudioStatisticsEvent AppDeployed { get; } = new("AppDeployed", "App deployed");
+
+    /// <summary>
+    /// Gets the event for successful app update.
+    /// </summary>
     public static StudioStatisticsEvent AppUpdated { get; } = new("AppUpdated", "App updated");
+
+    /// <summary>
+    /// Gets the event for successful app decommissioning.
+    /// </summary>
     public static StudioStatisticsEvent AppDecommissioned { get; } = new("AppDecommissioned", "App Decommissioned");
+
+    /// <summary>
+    /// Gets the event for failed app deployment.
+    /// </summary>
     public static StudioStatisticsEvent AppDeployFailed { get; } = new("AppDeployFailed", "App deploy failed");
+
+    /// <summary>
+    /// Gets the event for failed app update.
+    /// </summary>
     public static StudioStatisticsEvent AppUpdateFailed { get; } = new("AppUpdateFailed", "App update failed");
+
+    /// <summary>
+    /// Gets the event for failed app decommissioning.
+    /// </summary>
     public static StudioStatisticsEvent AppDecommissionFailed { get; } = new("AppDecommissionFailed", "App decommission failed");
 }
```

Alternative implementation using an enum for event names:
```csharp
public enum StudioStatisticsEventType
{
    AppDeployed,
    AppUpdated,
    AppDecommissioned,
    AppDeployFailed,
    AppUpdateFailed,
    AppDecommissionFailed
}

public class StudioStatisticsEvent
{
    public StudioStatisticsEventType Type { get; private set; }
    public string Description { get; private set; }

    private StudioStatisticsEvent(StudioStatisticsEventType type, string description)
    {
        Type = type;
        Description = description;
    }

    public static StudioStatisticsEvent AppDeployed { get; } = new(StudioStatisticsEventType.AppDeployed, "App deployed");
    // ... other static properties
}
```

backend/src/Designer/Repository/IDeploymentRepository.cs (1)
31-40: Complete XML documentation for new methods. Add parameter descriptions to the XML documentation for better IntelliSense support and code maintainability.
```diff
 /// <summary>
 /// Gets the last deployed entity on environment
 /// </summary>
+/// <param name="org">The organization identifier.</param>
+/// <param name="app">The application identifier.</param>
+/// <param name="environment">The target environment.</param>
+/// <returns>The last deployed entity.</returns>
 Task<DeploymentEntity> GetLastDeployed(string org, string app, string environment);

 /// <summary>
 /// Get all deployments for an app in an environment
 /// </summary>
+/// <param name="org">The organization identifier.</param>
+/// <param name="app">The application identifier.</param>
+/// <param name="environment">The target environment.</param>
+/// <param name="query">The query model for filtering results.</param>
+/// <returns>A collection of successful deployment entities.</returns>
 Task<IEnumerable<DeploymentEntity>> GetSucceeded(string org, string app, string environment, DocumentQueryModel query);
```

backend/src/Designer/Designer.csproj (1)
26-28: Review Kafka Package References Addition. The three new package references (`Confluent.Kafka`, `Confluent.SchemaRegistry`, and `Confluent.SchemaRegistry.Serdes.Json`) have been added to support Kafka integration and schema management. Please ensure that their versions are controlled consistently across the solution (for example, via a centralized file like `NuGet.props`), so that all project dependencies remain in sync.
🛑 Comments failed to post (5)
backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1)
32-40: 🛠️ Refactor suggestion
Add error handling for Kafka produce failures.
Currently, the `ProduceAsync` call doesn't handle possible exceptions or respond to produce failures (e.g., broker unavailability, authentication issues). Consider wrapping this in a try/catch block or using error events to log or handle any produce failures for improved system reliability. Would you like me to suggest an updated code snippet to illustrate proper exception handling and error logging?
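A minimal sketch of that shape, assuming the handler delegates to an injected `IKafkaProducer` (as it does in a later revision of this PR) and that a logger is available (the `ILogger` field is an assumption for illustration):

```csharp
public async Task Handle(Events.DeploymentPipelineCompleted notification, CancellationToken cancellationToken)
{
    var studioStatistics = await CalculateStudioStatistics(notification);
    try
    {
        await _kafkaProducer.ProduceAsync(studioStatistics, cancellationToken);
    }
    catch (ProduceException<Null, StudioStatisticsModel> ex)
    {
        // Broker-side failures (unavailability, auth errors, timeouts) surface here.
        // Statistics are best-effort, so log instead of failing the whole event handler.
        _logger.LogError(ex, "Failed to publish studio statistics for {Org}/{App}",
            notification.EditingContext.Org, notification.EditingContext.Repo);
    }
}
```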
backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsModel.cs (1)
5-14: 🛠️ Refactor suggestion
Consider making properties immutable and adding validation.
A few suggestions to improve the model:
- Make properties immutable to prevent unintended state changes after initialization.
- Add validation for required fields.
- Add XML documentation for public members.
```diff
 public record StudioStatisticsModel
 {
-    public string EventName { get; set; }
-    public string Environment { get; set; }
-    public string Description { get; set; }
-    public string Org { get; set; }
-    public string App { get; set; }
-    public string AppsEnvironment { get; set; }
-    public Dictionary<string, string> AdditionalData { get; set; }
+    /// <summary>
+    /// Gets the name of the event.
+    /// </summary>
+    public string EventName { get; init; }
+
+    /// <summary>
+    /// Gets the environment where the event occurred.
+    /// </summary>
+    public string Environment { get; init; }
+
+    /// <summary>
+    /// Gets the description of the event.
+    /// </summary>
+    public string Description { get; init; }
+
+    /// <summary>
+    /// Gets the organization identifier.
+    /// </summary>
+    public string Org { get; init; }
+
+    /// <summary>
+    /// Gets the application identifier.
+    /// </summary>
+    public string App { get; init; }
+
+    /// <summary>
+    /// Gets the apps environment.
+    /// </summary>
+    public string AppsEnvironment { get; init; }
+
+    /// <summary>
+    /// Gets additional data associated with the event.
+    /// </summary>
+    public Dictionary<string, string> AdditionalData { get; init; }
+
+    /// <summary>
+    /// Initializes a new instance of the <see cref="StudioStatisticsModel"/> class.
+    /// </summary>
+    public StudioStatisticsModel(string eventName, string environment, string description, string org, string app, string appsEnvironment, Dictionary<string, string> additionalData)
+    {
+        EventName = !string.IsNullOrEmpty(eventName) ? eventName : throw new ArgumentNullException(nameof(eventName));
+        Environment = !string.IsNullOrEmpty(environment) ? environment : throw new ArgumentNullException(nameof(environment));
+        Description = description;
+        Org = !string.IsNullOrEmpty(org) ? org : throw new ArgumentNullException(nameof(org));
+        App = !string.IsNullOrEmpty(app) ? app : throw new ArgumentNullException(nameof(app));
+        AppsEnvironment = !string.IsNullOrEmpty(appsEnvironment) ? appsEnvironment : throw new ArgumentNullException(nameof(appsEnvironment));
+        AdditionalData = additionalData ?? new Dictionary<string, string>();
+    }
 }
```
backend/src/Designer/Configuration/KafkaSettings.cs (1)
5-14: 🛠️ Refactor suggestion
Enhance security and maintainability of Kafka settings.
A few suggestions to improve the configuration:
- Make properties immutable to prevent unintended configuration changes.
- Add validation for required fields.
- Add XML documentation for public members.
- Consider using a secure configuration provider for sensitive information.
```diff
 public class KafkaSettings : ISettingsMarker
 {
-    public string BootstrapServers { get; set; }
-    public string KafkaUserName { get; set; }
-    public string KafkaPassword { get; set; }
-    public string SchemaRegistryUrl { get; set; }
-    public string SchemaRegistryUserName { get; set; }
-    public string SchemaRegistryPassword { get; set; }
-    public string StatisticsTopic { get; set; }
+    /// <summary>
+    /// Gets the Kafka bootstrap servers.
+    /// </summary>
+    public string BootstrapServers { get; init; }
+
+    /// <summary>
+    /// Gets the Kafka username.
+    /// </summary>
+    public string KafkaUserName { get; init; }
+
+    /// <summary>
+    /// Gets the Kafka password.
+    /// </summary>
+    public string KafkaPassword { get; init; }
+
+    /// <summary>
+    /// Gets the Schema Registry URL.
+    /// </summary>
+    public string SchemaRegistryUrl { get; init; }
+
+    /// <summary>
+    /// Gets the Schema Registry username.
+    /// </summary>
+    public string SchemaRegistryUserName { get; init; }
+
+    /// <summary>
+    /// Gets the Schema Registry password.
+    /// </summary>
+    public string SchemaRegistryPassword { get; init; }
+
+    /// <summary>
+    /// Gets the statistics topic name.
+    /// </summary>
+    public string StatisticsTopic { get; init; }
+
+    /// <summary>
+    /// Initializes a new instance of the <see cref="KafkaSettings"/> class.
+    /// </summary>
+    public KafkaSettings()
+    {
+        // Initialize with values from configuration
+    }
+
+    /// <summary>
+    /// Validates the settings.
+    /// </summary>
+    public void Validate()
+    {
+        if (string.IsNullOrEmpty(BootstrapServers))
+            throw new ArgumentNullException(nameof(BootstrapServers));
+        if (string.IsNullOrEmpty(SchemaRegistryUrl))
+            throw new ArgumentNullException(nameof(SchemaRegistryUrl));
+        if (string.IsNullOrEmpty(StatisticsTopic))
+            throw new ArgumentNullException(nameof(StatisticsTopic));
+    }
 }
```

Consider using Azure Key Vault or a similar secure configuration provider for sensitive information like passwords. This helps prevent accidental exposure of credentials in logs or error messages.
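A hedged illustration of wiring such a provider in at startup, assuming the `Azure.Extensions.AspNetCore.Configuration.Secrets` and `Azure.Identity` packages and a vault URI supplied through configuration (neither is part of this PR):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Overlay secrets such as KafkaSettings:KafkaPassword and
// KafkaSettings:SchemaRegistryPassword from Key Vault; secret names
// use "--" in place of ":" (e.g. KafkaSettings--KafkaPassword).
string vaultUri = builder.Configuration["KeyVaultUri"]; // assumed setting name
if (!string.IsNullOrEmpty(vaultUri))
{
    builder.Configuration.AddAzureKeyVault(new Uri(vaultUri), new DefaultAzureCredential());
}
```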
backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs (1)
7-7: ⚠️ Potential issue: Remove unnecessary using directive.
The using directive for `Altinn.Studio.Designer.TypedHttpClients.AzureDevOps.Models` is not used in this file.

```diff
-using Altinn.Studio.Designer.TypedHttpClients.AzureDevOps.Models;
```
🧰 Tools
🪛 GitHub Check: Format check
[failure] 7-7:
Using directive is unnecessary.
🪛 GitHub Actions: Dotnet format check
[error] 7-7: IDE0005: Using directive is unnecessary.
development/kafka/compose.yaml (1)
24-28: ⚠️ Potential issue: Move credentials to environment variables.
Hardcoded credentials in the compose file pose a security risk. Consider moving them to a separate .env file.
Create a `.env` file and update the compose file to use environment variables:

```diff
-      username="admin"
-      password="Test1234$"
-      user_admin="Test1234$"
-      user_user="Test1234$";
+      username="${KAFKA_ADMIN_USERNAME}"
+      password="${KAFKA_ADMIN_PASSWORD}"
+      user_admin="${KAFKA_ADMIN_PASSWORD}"
+      user_user="${KAFKA_USER_PASSWORD}";
```

Similar changes should be made for Schema Registry and Kafdrop credentials.
Also applies to: 44-45, 57-58, 61-65
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 7
🧹 Nitpick comments (6)
backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs (3)
67-67: Consider performance optimization for the query.
- The case-insensitive string comparison might affect index usage. Consider normalizing the `Result` column in the database.
- The complex WHERE clause might benefit from a composite index (a sketch follows below).
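A hedged sketch of such an index, using the entity and property names visible in this PR's repository code and assuming the standard EF Core `OnModelCreating` hook on the designer's DbContext:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // Composite index matching the WHERE clause in GetSucceeded:
    // Org, App and EnvName narrow the scan, and Created supports the ORDER BY.
    modelBuilder.Entity<DeploymentDbModel>()
        .HasIndex(d => new { d.Org, d.App, d.EnvName, d.Created });
}
```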
61-77: Reduce code duplication with the Get method. The query construction pattern is similar to the `Get` method. Consider extracting common query logic into a private helper method.

```diff
+private IQueryable<DeploymentDbModel> CreateBaseQuery(string org, string app, DocumentQueryModel query)
+{
+    var deploymentsQuery = _dbContext.Deployments.Include(d => d.Build).AsNoTracking()
+        .Where(x => x.Org == org && x.App == app);
+
+    deploymentsQuery = query.SortDirection == SortDirection.Descending
+        ? deploymentsQuery.OrderByDescending(d => d.Created)
+        : deploymentsQuery.OrderBy(d => d.Created);
+
+    return deploymentsQuery.Take(query.Top ?? int.MaxValue);
+}

 public async Task<IEnumerable<DeploymentEntity>> GetSucceeded(string org, string app, string environment, DocumentQueryModel query)
 {
     Guard.AssertArgumentNotNullOrWhiteSpace(environment, nameof(environment));
     Guard.AssertArgumentNotNullOrWhiteSpace(org, nameof(org));
     Guard.AssertArgumentNotNullOrWhiteSpace(app, nameof(app));

-    var deploymentsQuery = _dbContext.Deployments.Include(d => d.Build).AsNoTracking().Where(x => x.Org == org && x.App == app && x.EnvName == environment && x.Build.Result.Equals("succeeded", System.StringComparison.CurrentCultureIgnoreCase));
-
-    deploymentsQuery = query.SortDirection == SortDirection.Descending
-        ? deploymentsQuery.OrderByDescending(d => d.Created)
-        : deploymentsQuery.OrderBy(d => d.Created);
-
-    deploymentsQuery = deploymentsQuery.Take(query.Top ?? int.MaxValue);
+    var deploymentsQuery = CreateBaseQuery(org, app, query)
+        .Where(x => x.EnvName == environment &&
+                    x.Build.Result.Equals("succeeded", System.StringComparison.CurrentCultureIgnoreCase));

     var dbObjects = await deploymentsQuery.ToListAsync();
     return DeploymentMapper.MapToModels(dbObjects);
 }
```
67-67: Consider using a constant for the "succeeded" status. The string literal "succeeded" should be defined as a constant to improve maintainability.

```diff
+private const string SucceededStatus = "succeeded";

 public async Task<IEnumerable<DeploymentEntity>> GetSucceeded(string org, string app, string environment, DocumentQueryModel query)
 {
     // ...
-    var deploymentsQuery = _dbContext.Deployments.Include(d => d.Build).AsNoTracking().Where(x => x.Org == org && x.App == app && x.EnvName == environment && x.Build.Result.Equals("succeeded", System.StringComparison.CurrentCultureIgnoreCase));
+    var deploymentsQuery = _dbContext.Deployments.Include(d => d.Build).AsNoTracking().Where(x => x.Org == org && x.App == app && x.EnvName == environment && x.Build.Result.Equals(SucceededStatus, System.StringComparison.CurrentCultureIgnoreCase));
     // ...
 }
```

backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (2)
21-30: Consider using IOptions pattern for KafkaSettings. For better configuration management and runtime updates support, consider injecting `IOptions<KafkaSettings>` instead of directly injecting `KafkaSettings`.

```diff
-    private readonly KafkaSettings _kafkaSettings;
+    private readonly IOptions<KafkaSettings> _kafkaSettings;

-    public DeploymentPipelineCompletedStatisticsHandler(KafkaSettings kafkaSettings, IDeploymentRepository deploymentRepository, IWebHostEnvironment hostingEnvironment)
+    public DeploymentPipelineCompletedStatisticsHandler(IOptions<KafkaSettings> kafkaSettings, IDeploymentRepository deploymentRepository, IWebHostEnvironment hostingEnvironment)
     {
         _kafkaSettings = kafkaSettings;
         _deploymentRepository = deploymentRepository;
         _hostingEnvironment = hostingEnvironment;
     }
```
76-92: Remove duplicate environment value. The environment value is duplicated in both `Environment` and `AppsEnvironment` fields. Consider using only one field or clarify the difference between these fields if they serve different purposes.

```diff
 private async Task<StudioStatisticsModel> CalculateStudioStatistics(Events.DeploymentPipelineCompleted notification)
 {
     StudioStatisticsEvent studioEvent = await CalculateStudioStatisticsEvent(notification);
     return new StudioStatisticsModel
     {
         EventName = studioEvent.Name,
         Description = studioEvent.Description,
         Environment = notification.Environment,
         Org = notification.EditingContext.Org,
         App = notification.EditingContext.Repo,
-        AppsEnvironment = notification.Environment,
         AdditionalData = new Dictionary<string, string>()
         {
             { "studioEnvironment", _hostingEnvironment.EnvironmentName }
         }
     };
 }
```
development/kafka/compose.yaml (1)
49-68: Improve Kafdrop configuration.
- Credentials are hardcoded in the configuration
- JVM memory settings are very limited (32M-64M)
Consider:
- Using environment variables for credentials
- Adjusting JVM memory settings based on expected usage
```diff
   environment:
     KAFKA_BROKERCONNECT: kafka:9092
     SCHEMAREGISTRY_CONNECT: schemaregistry:8081
-    SCHEMAREGISTRY_AUTH: schemauser:Test1234$
+    SCHEMAREGISTRY_AUTH: ${SCHEMA_REGISTRY_USERNAME}:${SCHEMA_REGISTRY_PASSWORD}
-    JVM_OPTS: "-Xms32M -Xmx64M"
+    JVM_OPTS: "-Xms64M -Xmx128M"
     KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafdrop_jaas.conf"
     KAFKA_CLIENT_JAAS_CONF: |
       KafkaClient {
         org.apache.kafka.common.security.plain.PlainLoginModule required
-        username="admin"
-        password="Test1234$";
+        username="${KAFKA_ADMIN_USERNAME}"
+        password="${KAFKA_ADMIN_PASSWORD}";
       };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- backend/packagegroups/NuGet.props (1 hunks)
- backend/src/Designer/Configuration/KafkaSettings.cs (1 hunks)
- backend/src/Designer/Designer.csproj (1 hunks)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1 hunks)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsEvent.cs (1 hunks)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsModel.cs (1 hunks)
- backend/src/Designer/Events/DeploymentPipelineCompleted.cs (1 hunks)
- backend/src/Designer/Repository/IDeploymentRepository.cs (1 hunks)
- backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs (2 hunks)
- backend/src/Designer/Scheduling/DeploymentPipelinePollingJob.cs (3 hunks)
- development/kafka/compose.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- backend/src/Designer/Repository/IDeploymentRepository.cs
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsModel.cs
- backend/packagegroups/NuGet.props
- backend/src/Designer/Events/DeploymentPipelineCompleted.cs
- backend/src/Designer/Designer.csproj
- backend/src/Designer/Configuration/KafkaSettings.cs
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsEvent.cs
- backend/src/Designer/Scheduling/DeploymentPipelinePollingJob.cs
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Run dotnet build and test (macos-latest)
- GitHub Check: Run dotnet build and test (windows-latest)
🔇 Additional comments (2)
backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs (1)
10-10: LGTM! The Guard helper is appropriately imported for parameter validation.
development/kafka/compose.yaml (1)
2-9: ❓ Verification inconclusive
Secure Zookeeper configuration for production use.
The current configuration allows anonymous login which is acceptable for development but should not be used in production. Consider:
- Using environment variables for credentials
- Documenting that this is a development-only configuration
🏁 Script executed:
```bash
#!/bin/bash
# Check if there are any production configuration files
fd -g "*prod*.yaml" -g "*prod*.yml"
```

Length of output: 135
Secure Zookeeper configuration for production use
This configuration in `development/kafka/compose.yaml` enables anonymous login via the Bitnami Zookeeper image. While this might be acceptable in a development environment, it poses a security risk if accidentally used in production. Please ensure that:
- If this configuration is meant solely for development, clearly document its intended scope.
- For production environments, switch to a configuration that does not allow anonymous login—preferably by using environment variables to manage credentials securely.
- Manually verify that no production configuration file (e.g., files hinting at "prod") inadvertently reuses this insecure setup.
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (2)
44-44: Configure JsonSerializerConfig properties. The JsonSerializerConfig is instantiated but not configured. Consider setting appropriate properties like AutoRegisterSchemas, UseLatestVersion, etc.

```diff
-    var jsonSerializerConfig = new JsonSerializerConfig();
+    var jsonSerializerConfig = new JsonSerializerConfig
+    {
+        AutoRegisterSchemas = true,
+        UseLatestVersion = true
+    };
```
83-86: Remove duplicate environment information. The environment is stored twice in the model: once in `Environment` and once in `AppsEnvironment`.

```diff
 return new StudioStatisticsModel
 {
     EventName = studioEvent.Name,
     Description = studioEvent.Description,
     Environment = notification.Environment,
     Org = notification.EditingContext.Org,
     App = notification.EditingContext.Repo,
-    AppsEnvironment = notification.Environment,
     AdditionalData = new Dictionary<string, string>()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: Run dotnet build and test (macos-latest)
- GitHub Check: Build environment and run e2e test
- GitHub Check: Run dotnet build and test (windows-latest)
- GitHub Check: Analyze
- GitHub Check: Run integration tests against actual gitea and db
- GitHub Check: Run dotnet build and test (ubuntu-latest)
🔇 Additional comments (6)
backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (6)
1-30: LGTM! Well-structured class with appropriate dependencies. The class follows dependency injection best practices and has a clean structure.
32-40: Add error handling for Kafka producer operations. The Kafka producer operation could fail. Consider adding a try/catch block to handle potential exceptions and implementing retry logic.
51-59: Add missing security configuration for Kafka producer. The Kafka broker is configured with SASL PLAIN authentication, but the producer configuration is missing the required security settings.
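A minimal sketch of the missing settings, assuming the SASL PLAIN plaintext listener from development/kafka/compose.yaml and the credential properties already defined on `KafkaSettings`:

```csharp
private ProducerConfig GetProducerConfig()
{
    return new ProducerConfig
    {
        BootstrapServers = _kafkaSettings.BootstrapServers,
        // Matches the development broker's SASL PLAIN listener; a production
        // broker would typically use SecurityProtocol.SaslSsl instead.
        SecurityProtocol = SecurityProtocol.SaslPlaintext,
        SaslMechanism = SaslMechanism.Plain,
        SaslUsername = _kafkaSettings.KafkaUserName,
        SaslPassword = _kafkaSettings.KafkaPassword
    };
}
```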
61-64: Add error handling for Schema Registry client creation. The Schema Registry client creation could fail. Consider adding a try/catch block and implementing proper error handling.
94-114: LGTM! Clear and well-structured event type determination. The logic for determining event types is clear and includes appropriate validation.
132-133: Simplify and clarify the update detection logic. The update determination uses two different thresholds that could lead to non-obvious edge cases.
Actionable comments posted: 6
♻️ Duplicate comments (3)
backend/src/Designer/Services/Implementation/KafkaProducer.cs (2)
32-39: ⚠️ Potential issue: Add missing security configuration for Kafka producer.
The Kafka producer configuration is missing the required security settings for SASL PLAIN authentication.
51-64: 🛠️ Refactor suggestion: Add error handling for Schema Registry client creation. The Schema Registry client creation could fail. Consider adding a try/catch block.

backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1)
backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1)
30-34: 🛠️ Refactor suggestion: Add error handling for Kafka producer operations.
The Handle method should catch and handle Kafka producer exceptions appropriately.
```diff
 public async Task Handle(Events.DeploymentPipelineCompleted notification, CancellationToken cancellationToken)
 {
-    var studioStatistics = await CalculateStudioStatistics(notification);
-    await _kafkaProducer.ProduceAsync(studioStatistics, cancellationToken);
+    try
+    {
+        var studioStatistics = await CalculateStudioStatistics(notification);
+        await _kafkaProducer.ProduceAsync(studioStatistics, cancellationToken);
+    }
+    catch (Exception ex)
+    {
+        // Consider logging the error or implementing retry logic
+        throw new Exception($"Failed to handle deployment pipeline completion: {ex.Message}", ex);
+    }
 }
```
🧹 Nitpick comments (9)
backend/src/Designer/Models/StudioStatisticsModel.cs (1)
5-14: Consider making properties immutable and adding validation. While the model structure is good, consider these improvements:
- Make properties immutable since this is a record type
- Add validation attributes for required fields
- Consider using init-only properties
Example refactor:
```diff
 public record StudioStatisticsModel
 {
-    public string EventName { get; set; }
-    public string Environment { get; set; }
-    public string Description { get; set; }
-    public string Org { get; set; }
-    public string App { get; set; }
-    public string AppsEnvironment { get; set; }
-    public Dictionary<string, string> AdditionalData { get; set; }
+    [Required]
+    public string EventName { get; init; }
+    [Required]
+    public string Environment { get; init; }
+    public string Description { get; init; }
+    [Required]
+    public string Org { get; init; }
+    [Required]
+    public string App { get; init; }
+    public string AppsEnvironment { get; init; }
+    public IReadOnlyDictionary<string, string> AdditionalData { get; init; }
 }
```

backend/src/Designer/Helpers/AltinnStudioRepositoryScanner.cs (1)
14-17: Consider making the path configurable and adding an existence check. The method follows the class pattern but could be improved:
- Consider making the path configurable
- Add validation to check if the file exists
Example refactor:
```diff
+private const string DEFAULT_KAFKA_COMPOSE_PATH = "development/kafka/compose.yaml";
+
-public static string FindKafkaComposerFilePath()
+public static string FindKafkaComposerFilePath(string relativePath = DEFAULT_KAFKA_COMPOSE_PATH)
 {
-    return Path.Combine(FindRootDirectoryPath(), "development", "kafka", "compose.yaml");
+    string fullPath = Path.Combine(FindRootDirectoryPath(), relativePath);
+    return File.Exists(fullPath)
+        ? fullPath
+        : throw new FileNotFoundException($"Kafka compose file not found at {fullPath}");
 }
```

backend/tests/Designer.Tests/Kafka/KafkaProducerTests.cs (2)
27-56: Add test cases for error scenarios. The test suite only covers the happy path. Consider adding test cases for (a sketch of one such case follows the list):
- Schema registry failures
- Network issues
- Authentication failures
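A hedged sketch of one such case, assuming the `KafkaProducer` constructor takes a `KafkaSettings` instance (as registered in DI) and that an unreachable Schema Registry surfaces as an exception from `ProduceAsync`:

```csharp
[Fact]
[Trait("Category", "KafkaIntegrationTest")]
public async Task ProduceAsync_WhenSchemaRegistryUnreachable_Throws()
{
    var settings = new KafkaSettings
    {
        BootstrapServers = "localhost:9092",      // assumed local broker address
        SchemaRegistryUrl = "http://localhost:1", // nothing listens on this port
        StatisticsTopic = "studio-statistics"     // assumed topic name
    };
    var producer = new KafkaProducer(settings);

    await Assert.ThrowsAnyAsync<Exception>(() =>
        producer.ProduceAsync(new StudioStatisticsModel { EventName = "AppDeployed" }));
}
```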
58-62: Add error handling for Docker Compose cleanup. The cleanup operation in Dispose could fail silently. Consider logging the error.

```diff
 public void Dispose()
 {
-    // try to stop kafka stack
-    CommandExecutor.TryExecute($"docker compose -f {_kafkaCompooseFilePath} down", out _, out _);
+    try
+    {
+        if (!CommandExecutor.TryExecute($"docker compose -f {_kafkaCompooseFilePath} down", out _, out string error))
+        {
+            Console.WriteLine($"Failed to stop kafka stack: {error}");
+        }
+    }
+    catch (Exception ex)
+    {
+        Console.WriteLine($"Error during kafka stack cleanup: {ex.Message}");
+    }
 }
```

backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1)
82-94: Simplify and document the update detection logic. The update detection logic uses different thresholds for successful and failed deployments. Consider extracting these thresholds into constants and adding detailed documentation.

```diff
+private const int SUCCESSFUL_DEPLOYMENT_UPDATE_THRESHOLD = 1;
+private const int FAILED_DEPLOYMENT_UPDATE_THRESHOLD = 0;
+
 private async Task<bool> IsAppUpdated(Events.DeploymentPipelineCompleted notification)
 {
     var deploymentsInEnvironment = (await _deploymentRepository.GetSucceeded(
         notification.EditingContext.Org,
         notification.EditingContext.Repo,
         notification.Environment,
         new DocumentQueryModel { SortDirection = SortDirection.Descending }
     )).ToList();

-    return (notification.Succeeded && deploymentsInEnvironment.Count > 1) ||
-           (!notification.Succeeded && deploymentsInEnvironment.Count > 0);
+    // For successful deployments, we need more than one successful deployment to consider it an update
+    // For failed deployments, we need at least one successful deployment to consider it an update attempt
+    return (notification.Succeeded && deploymentsInEnvironment.Count > SUCCESSFUL_DEPLOYMENT_UPDATE_THRESHOLD) ||
+           (!notification.Succeeded && deploymentsInEnvironment.Count > FAILED_DEPLOYMENT_UPDATE_THRESHOLD);
 }
```

backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/GetSucceededTests.cs (2)
40-42: Remove duplicate assertion. The same assertion is repeated twice.

```diff
 Assert.Equal(top, result.Count);
-Assert.Equal(top, result.Count);
```
91-101: Add more test cases for edge scenarios. Consider adding test cases for:
- Empty result sets
- Maximum value for top parameter
- Invalid environment names
backend/src/Designer/Configuration/Extensions/ServiceCollectionExtensions.cs (1)
94-109: Consider using IOptions pattern for singleton settings. While the implementation works, it deviates from the IOptions pattern used in the scoped settings registration. Using IOptions provides several benefits:
- Configuration reloading support
- Strongly typed access to settings
- Consistent pattern across the codebase
Consider this alternative implementation:
```diff
 private static void ConfigureSettingsTypeBySectionSingleton<TOption>(this IServiceCollection services, IConfiguration configuration, string sectionName) where TOption : class, new()
 {
-    var options = new TOption();
-    configuration.GetSection(sectionName).Bind(options);
-    services.TryAddSingleton(typeof(TOption), _ => options);
+    services.Configure<TOption>(configuration.GetSection(sectionName));
+    services.TryAddSingleton(typeof(TOption), svc => svc.GetRequiredService<IOptions<TOption>>().Value);
 }
```

development/kafka/compose.yaml (1)
46-57: Init-Kafka Command Block: Clarify Quotation Usage. The multi-line command block is enclosed by an extra starting and ending double quote. This can sometimes lead to shell parsing issues or reduced readability. Consider using a block scalar (`|` or `>`) without extra quotes to improve clarity.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- .github/workflows/designer-dotnet-test.yaml (1 hunks)
- .github/workflows/gitea-designer-integration-tests.yaml (3 hunks)
- backend/src/Designer/Configuration/Extensions/ServiceCollectionExtensions.cs (1 hunks)
- backend/src/Designer/Configuration/KafkaSettings.cs (1 hunks)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1 hunks)
- backend/src/Designer/Helpers/AltinnStudioRepositoryScanner.cs (1 hunks)
- backend/src/Designer/Infrastructure/ServiceRegistration.cs (1 hunks)
- backend/src/Designer/Models/StudioStatisticsModel.cs (1 hunks)
- backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs (2 hunks)
- backend/src/Designer/Services/Implementation/KafkaProducer.cs (1 hunks)
- backend/src/Designer/Services/Interfaces/IKafkaProducer.cs (1 hunks)
- backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/GetSucceededTests.cs (1 hunks)
- backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/Utils/EntityGenerationUtils.cs (1 hunks)
- backend/tests/Designer.Tests/Kafka/KafkaProducerTests.cs (1 hunks)
- backend/tests/Designer.Tests/Kafka/LocalKafkaConstants.cs (1 hunks)
- development/kafka/compose.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- backend/tests/Designer.Tests/Kafka/LocalKafkaConstants.cs
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/src/Designer/Configuration/KafkaSettings.cs
- backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Run dotnet build and test (macos-latest)
- GitHub Check: Build environment and run e2e test
- GitHub Check: Run dotnet build and test (windows-latest)
- GitHub Check: Analyze
- GitHub Check: Run dotnet build and test (ubuntu-latest)
🔇 Additional comments (10)
backend/src/Designer/Services/Interfaces/IKafkaProducer.cs (1)
7-10: LGTM! Well-designed interface. The interface follows C# best practices:
- Follows TAP pattern with async suffix
- Properly handles cancellation
- Returns the model to confirm delivery
backend/src/Designer/Services/Implementation/KafkaProducer.cs (1)
12-19: LGTM! The class follows the dependency injection pattern and has a clean constructor implementation.
backend/src/Designer/Infrastructure/ServiceRegistration.cs (1)
80-81: LGTM! Clean and focused service registration. The Kafka services are registered with appropriate lifetimes:
- `KafkaSettings` as singleton since it's configuration data
- `IKafkaProducer` as transient since it's a stateless service

development/kafka/compose.yaml (7)
1-9: Zookeeper Configuration: Standard Setup for Development
The zookeeper service uses the Bitnami image with anonymous login enabled. This is typically acceptable for a development environment. For production use, consider tightening security by disabling anonymous login or configuring proper authentication.
10-29: Kafka Service: Hardcoded Credentials in JAAS Configuration
Within the Kafka service's environment, the SASL JAAS configuration (lines 25–28) contains hardcoded credentials. Externalizing these credentials using environment variables or Docker secrets is recommended to improve security and maintainability.
30-35: Kafka Health Check: Appropriate Configuration
The health check uses the Kafka broker API versions script with suitable interval, timeout, and retry settings. This setup helps ensure the Kafka service is operational before dependent services start.
36-45: Init-Kafka Service: Externalize Client Credentials
The init-kafka service specifies hardcoded values for `KAFKA_CLIENT_USERNAME` and `KAFKA_CLIENT_PASSWORD` (lines 44–45). For enhanced security, consider externalizing these credentials using environment variables or Docker secrets.
58-70: Schema Registry: Avoid Hardcoded Authentication Credentials
The Schema Registry configuration includes hardcoded credentials for `SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO` (line 70). It is advisable to replace these with environment variables or Docker secrets to prevent potential security issues.
74-90: Kafdrop Service: Hardcoded Credentials in Authentication and JAAS Config
Kafdrop's environment includes hardcoded credentials in the `SCHEMAREGISTRY_AUTH` variable (line 82) and within the `KAFKA_CLIENT_JAAS_CONF` block (lines 85–90). For security and easier maintenance, consider externalizing these sensitive values.
91-94: Kafdrop Dependencies: Correct Service Dependency Setup
The `depends_on` configuration for Kafdrop (lines 91–94) appropriately ensures that Kafdrop starts only after both Kafka and Schema Registry are available, which helps maintain service order during startup.
Force-pushed from e74af06 to 6e14bd1
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/designer-kafka-integration-tests.yaml (1)
28-31: Local IP Address Retrieval:
This step retrieves the local IP address using a shell command (`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1`). If the IP is intended for configuration or logging purposes later in the workflow, consider capturing the output into an environment variable. If it's solely for diagnostic output, this implementation is sufficient.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/designer-gitea-and-db-integration-tests copy.yaml (3 hunks)
- .github/workflows/designer-kafka-integration-tests.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: Build environment and run e2e test
- GitHub Check: Run dotnet build and test (macos-latest)
- GitHub Check: Run dotnet build and test (windows-latest)
- GitHub Check: Analyze
- GitHub Check: Run dotnet build and test (ubuntu-latest)
- GitHub Check: Run integration tests against actual gitea and dbs
🔇 Additional comments (8)
.github/workflows/designer-gitea-and-db-integration-tests copy.yaml (3)
1-1: Workflow Name Update:
The workflow name "Run gitea and db integration tests" now clearly communicates the extended functionality to cover both Gitea and database integration tests. Make sure that any related documentation or tooling references are updated accordingly.
18-19: Job Naming and Description Update:
Renaming the job identifier to "gitea-db-integration-tests" and updating its description to "Run integration tests against actual gitea and dbs" improves clarity and conciseness. Consider verifying consistency with naming conventions used in other workflows.
35-37: Integration Tests Step Update:
Renaming the step to "Run integration tests" and updating the test filter to combine "(Category=GiteaIntegrationTest)|(Category=DbIntegrationTest)" appropriately broadens the scope of tests executed. Please ensure that both the Gitea and DB integration tests are correctly tagged in the test suite so that the filter behaves as expected.

.github/workflows/designer-kafka-integration-tests.yaml (5)
1-13: Workflow Trigger and Naming Setup:
The workflow is named "Run kafka integration tests" and is configured to trigger on pushes, pull requests, and manual dispatches while watching files under the `development/kafka/**` path. This targeted trigger is effective for Kafka-specific changes. Verify that this restriction aligns with your intended test coverage; if Kafka tests depend on changes elsewhere (e.g., in the backend), you might want to adjust the paths accordingly.
14-20: Job Configuration and Environment Variables:
The job "kafka-integration-tests" is well defined to run on the `ubuntu-latest` runner with the environment variable `DOTNET_HOSTBUILDER__RELOADCONFIGONCHANGE` set to `false` to maintain consistent configuration behavior during tests. This setup appears standard for .NET-based integration testing.
21-27: .NET Setup and Code Checkout:
The steps for setting up .NET (using `actions/setup-dotnet@v4` with version `9.0.x`) and checking out the repository (using `actions/checkout@v4`) follow industry best practices and are clear in their intent.
32-35: Kafka Services Initialization:
Starting the Kafka-related services via Docker Compose using the file `development/kafka/compose.yaml` is an efficient approach. Ensure that the compose file is maintained and that the services are fully healthy before proceeding with test execution.
36-38: Running Kafka Integration Tests:
The integration tests are executed with a clear filter (`Category=KafkaIntegrationTest`) on the .NET solution. Confirm that the tests in your solution are consistently tagged with this category to ensure all relevant tests run correctly.
Force-pushed from 786917c to c080641
Actionable comments posted: 4
🧹 Nitpick comments (2)
backend/tests/Designer.Tests/Kafka/KafkaProducerTests.cs (1)
56-60: Add error handling for Docker Compose cleanup. The cleanup operation in the `Dispose` method should handle potential errors to ensure proper resource cleanup.

```diff
 public void Dispose()
 {
     // try to stop kafka stack
-    CommandExecutor.TryExecute($"docker compose -f {_kafkaCompooseFilePath} down", out _, out _);
+    if (!CommandExecutor.TryExecute($"docker compose -f {_kafkaCompooseFilePath} down", out _, out string error))
+    {
+        // Log the error but don't throw as this is cleanup
+        Console.WriteLine($"Failed to stop kafka stack: {error}");
+    }
 }
```

backend/src/Designer/Configuration/Extensions/ServiceCollectionExtensions.cs (1)
94-101: Add XML documentation for public method. Add XML documentation to maintain consistency with other public methods in the class.
```diff
+/// <summary>
+/// Registers settings as a singleton service.
+/// Settings configuration will be read from the configuration, and the settings section name will be the same as the class name unless specified.
+/// </summary>
+/// <param name="services">The <see cref="IServiceCollection"/> to add the service to.</param>
+/// <param name="configuration">An <see cref="IConfiguration"/> holding the configuration of the app.</param>
+/// <param name="section">Optional section name. If not provided, the type name will be used.</param>
+/// <typeparam name="TOption">The type of the options class to configure.</typeparam>
+/// <returns>A reference to this instance after the operation has completed.</returns>
 public static IServiceCollection RegisterSettingsSingleton<TOption>(this IServiceCollection services, IConfiguration configuration, string section = null)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .github/workflows/designer-gitea-and-db-integration-tests copy.yaml (3 hunks)
- backend/packagegroups/NuGet.props (1 hunks)
- backend/src/Designer/Configuration/Extensions/ServiceCollectionExtensions.cs (1 hunks)
- backend/src/Designer/Configuration/KafkaSettings.cs (1 hunks)
- backend/src/Designer/Designer.csproj (1 hunks)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs (1 hunks)
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsEvent.cs (1 hunks)
- backend/src/Designer/Events/DeploymentPipelineCompleted.cs (1 hunks)
- backend/src/Designer/Events/DeploymentPipelineQueued.cs (1 hunks)
- backend/src/Designer/Helpers/AltinnStudioRepositoryScanner.cs (1 hunks)
- backend/src/Designer/Infrastructure/ServiceRegistration.cs (1 hunks)
- backend/src/Designer/Models/StudioStatisticsModel.cs (1 hunks)
- backend/src/Designer/Repository/IDeploymentRepository.cs (1 hunks)
- backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs (2 hunks)
- backend/src/Designer/Scheduling/DeploymentPipelinePollingJob.cs (3 hunks)
- backend/src/Designer/Services/Implementation/KafkaProducer.cs (1 hunks)
- backend/src/Designer/Services/Interfaces/IKafkaProducer.cs (1 hunks)
- backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/GetSucceededTests.cs (1 hunks)
- backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/Utils/EntityGenerationUtils.cs (1 hunks)
- backend/tests/Designer.Tests/Kafka/KafkaProducerTests.cs (1 hunks)
- backend/tests/Designer.Tests/Kafka/LocalKafkaConstants.cs (1 hunks)
- development/kafka/compose.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- backend/src/Designer/Helpers/AltinnStudioRepositoryScanner.cs
- backend/src/Designer/Services/Interfaces/IKafkaProducer.cs
- backend/src/Designer/Configuration/KafkaSettings.cs
- backend/src/Designer/Events/DeploymentPipelineCompleted.cs
- backend/src/Designer/Designer.csproj
- backend/src/Designer/Models/StudioStatisticsModel.cs
- backend/src/Designer/Infrastructure/ServiceRegistration.cs
- backend/packagegroups/NuGet.props
- backend/tests/Designer.Tests/Kafka/LocalKafkaConstants.cs
- backend/src/Designer/Repository/IDeploymentRepository.cs
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/StudioStatisticsEvent.cs
- backend/src/Designer/Repository/ORMImplementation/DeploymentRepository.cs
- .github/workflows/designer-gitea-and-db-integration-tests copy.yaml
- backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/Utils/EntityGenerationUtils.cs
- backend/src/Designer/Events/DeploymentPipelineQueued.cs
- development/kafka/compose.yaml
- backend/tests/Designer.Tests/DbIntegrationTests/DeploymentEntityRepository/GetSucceededTests.cs
- backend/src/Designer/EventHandlers/DeploymentPipelineCompleted/DeploymentPipelineCompletedStatisticsHandler.cs
- backend/src/Designer/Scheduling/DeploymentPipelinePollingJob.cs
🔇 Additional comments (3)
backend/tests/Designer.Tests/Kafka/KafkaProducerTests.cs (1)
16-24: Add error handling for Docker Compose operations. The Docker Compose operations could fail. Consider adding cleanup in case of startup failure.
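A sketch of defensive startup, assuming the `CommandExecutor.TryExecute(command, out output, out error)` helper used elsewhere in these tests and the existing `_kafkaCompooseFilePath` field:

```csharp
public KafkaProducerTests()
{
    _kafkaCompooseFilePath = AltinnStudioRepositoryScanner.FindKafkaComposerFilePath();
    if (!CommandExecutor.TryExecute($"docker compose -f {_kafkaCompooseFilePath} up -d --wait", out _, out string error))
    {
        // Tear down any partially started services before failing the fixture.
        CommandExecutor.TryExecute($"docker compose -f {_kafkaCompooseFilePath} down", out _, out _);
        throw new InvalidOperationException($"Failed to start kafka stack: {error}");
    }
}
```

The `--wait` flag (an assumption here) makes Compose block until health checks pass, which pairs well with the health check defined for the Kafka service.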
backend/src/Designer/Services/Implementation/KafkaProducer.cs (2)
21-30: Add error handling for Kafka producer operations. The Kafka producer operation could fail. Consider adding a try/catch block to handle potential exceptions.
41-49: Add security protocol and SASL mechanism configuration. The producer configuration is missing essential security settings.
Actionable comments posted: 0
♻️ Duplicate comments (2)
backend/src/Designer/Services/Implementation/KafkaProducer.cs (2)
21-30: 🛠️ Refactor suggestion: Add error handling for Kafka producer operations. The Kafka producer operation could fail. Consider adding a try/catch block to handle potential exceptions.
```diff
 public async Task<StudioStatisticsModel> ProduceAsync(StudioStatisticsModel studioStatisticsModel, CancellationToken cancellationToken = default)
 {
-    using var producer = GetProducer();
-    await producer.ProduceAsync(_kafkaSettings.StatisticsTopic, new Message<Null, StudioStatisticsModel>
-    {
-        Value = studioStatisticsModel
-    }, cancellationToken);
-
-    return studioStatisticsModel;
+    try
+    {
+        using var producer = GetProducer();
+        await producer.ProduceAsync(_kafkaSettings.StatisticsTopic, new Message<Null, StudioStatisticsModel>
+        {
+            Value = studioStatisticsModel
+        }, cancellationToken);
+
+        return studioStatisticsModel;
+    }
+    catch (ProduceException<Null, StudioStatisticsModel> ex)
+    {
+        throw new Exception($"Failed to produce message to Kafka: {ex.Message}", ex);
+    }
 }
```
61-64: 🛠️ Refactor suggestion: Add error handling for schema registry operations. The schema registry client creation and configuration should handle potential errors.
```diff
 private CachedSchemaRegistryClient GetSchemaRegistryClient()
 {
-    return new CachedSchemaRegistryClient(GetSchemaRegistryConfig());
+    try
+    {
+        var config = GetSchemaRegistryConfig();
+        return new CachedSchemaRegistryClient(config);
+    }
+    catch (Exception ex)
+    {
+        throw new Exception($"Failed to create schema registry client: {ex.Message}", ex);
+    }
 }
```
🧹 Nitpick comments (1)
backend/src/Designer/Services/Implementation/KafkaProducer.cs (1)
66-74: Consider adding validation for schema registry configuration. The schema registry configuration is missing validation for required fields. This could lead to runtime errors if any of these configuration values are null or empty.
```diff
 private SchemaRegistryConfig GetSchemaRegistryConfig()
 {
+    if (string.IsNullOrEmpty(_kafkaSettings.SchemaRegistryUrl))
+    {
+        throw new ArgumentException("Schema registry URL cannot be null or empty", nameof(_kafkaSettings.SchemaRegistryUrl));
+    }
+
+    if (string.IsNullOrEmpty(_kafkaSettings.SchemaRegistryUserName) || string.IsNullOrEmpty(_kafkaSettings.SchemaRegistryPassword))
+    {
+        throw new ArgumentException("Schema registry credentials cannot be null or empty");
+    }
+
     return new SchemaRegistryConfig
     {
         Url = _kafkaSettings.SchemaRegistryUrl,
         BasicAuthCredentialsSource = AuthCredentialsSource.UserInfo,
         BasicAuthUserInfo = $"{_kafkaSettings.SchemaRegistryUserName}:{_kafkaSettings.SchemaRegistryPassword}"
     };
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/src/Designer/Configuration/EidLoggerClientSettings.cs (0 hunks)
- backend/src/Designer/Configuration/KafkaSettings.cs (1 hunks)
- backend/src/Designer/Services/Implementation/KafkaProducer.cs (1 hunks)
💤 Files with no reviewable changes (1)
- backend/src/Designer/Configuration/EidLoggerClientSettings.cs
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/src/Designer/Configuration/KafkaSettings.cs
🔇 Additional comments (1)
backend/src/Designer/Services/Implementation/KafkaProducer.cs (1)
41-59: Verify security settings for different environments. The current implementation handles SASL SSL configuration, but you might want to ensure proper handling of other security protocols like SASL Plaintext, which might be needed for different environments.
Consider running tests with different security configurations to ensure the producer can connect successfully in all target environments.
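A hedged sketch of selecting the protocol per environment; the `UseSsl` flag on `KafkaSettings` is hypothetical and would need to be added:

```csharp
private ProducerConfig GetProducerConfig()
{
    return new ProducerConfig
    {
        BootstrapServers = _kafkaSettings.BootstrapServers,
        SaslMechanism = SaslMechanism.Plain,
        SaslUsername = _kafkaSettings.KafkaUserName,
        SaslPassword = _kafkaSettings.KafkaPassword,
        // The local compose stack exposes SASL over plaintext; real environments
        // would flip the hypothetical UseSsl setting to get SASL_SSL.
        SecurityProtocol = _kafkaSettings.UseSsl
            ? SecurityProtocol.SaslSsl
            : SecurityProtocol.SaslPlaintext
    };
}
```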
Description
Related Issue(s)
Verification
Documentation
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores