
[TRACKING] Q2: Pipeline Automation of the Green Review for Falco #83

AntonioDiTuri opened this issue Apr 10, 2024 · 6 comments


AntonioDiTuri commented Apr 10, 2024

Intro

Hi everybody! We are back from KubeCon, and after the retro it is time to plan ahead.

During the meeting of 10th April we outlined the priorities for the next quarter.

Outcome

We will work on the pipeline automation of the Green Review for Falco.
The pipeline will have three steps in scope:

  1. Deploy - trigger a GitHub Actions workflow from an upstream release, including the released binary
  2. Run - run the benchmark tests through GitHub Actions
  3. Report - fetch & store the metrics
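
To make the three steps concrete, here is a minimal GitHub Actions sketch of how they could be chained. The file name, the repository_dispatch event type, and the payload field are assumptions for illustration, not decisions:

```yaml
# .github/workflows/green-review.yaml -- hypothetical file name
name: Green Review Pipeline

on:
  # Assumption: an upstream Falco release notifies this repo via
  # repository_dispatch; the event type is made up for illustration.
  repository_dispatch:
    types: [falco-release]
  workflow_dispatch: {}   # manual trigger, useful while testing

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy the released Falco version to the cluster
        run: echo "deploy ${{ github.event.client_payload.version }}"

  run:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - name: Run the benchmark tests
        run: echo "run benchmarks"

  report:
    needs: run
    runs-on: ubuntu-latest
    steps:
      - name: Fetch & store metrics
        run: echo "fetch metrics from Prometheus"
```

Each job body is a placeholder; the per-step proposals below would fill them in.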

Besides the pipeline automation, we will also work on a fourth proposal:

  4. Benchmarking investigation - what is the best set of benchmarks to run for Cloud Native projects?

This is one of the questions that the investigation should answer; more details will follow in the tracking issue.

We need a PR drafted for each of these proposals.

Todo

For each proposal we will need to:

  • Create a TRACKING Issue
  • Work on the PR for detailing and structuring the proposal
  • Review & Merge the PR

Proposal 1 - Deploy

Proposal 2 - Benchmark

After deploying the Falco project, we need to run the benchmark tests.
This proposal assumes that the benchmarks are the ones already defined here. See Proposal 4 to better understand why this is relevant.
The objective of this proposal is to document how to integrate the benchmark trigger into the GitHub Actions workflow.
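
One possible shape for that trigger is a reusable workflow; in this sketch the file name, the input, and the script path are assumptions:

```yaml
# .github/workflows/benchmark.yaml -- hypothetical reusable workflow
name: Run Benchmarks

on:
  workflow_call:
    inputs:
      falco-version:
        description: Falco version under review
        required: true
        type: string

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Trigger the benchmark tests against the deployed Falco
        # Placeholder: the real step would start the benchmark workloads
        # already defined in falcosecurity/cncf-green-review-testing.
        run: ./scripts/run-benchmarks.sh "${{ inputs.falco-version }}"
```

The pipeline's Run job could then invoke it with `uses: ./.github/workflows/benchmark.yaml`, passing the version it deployed.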

  • Create an ACTION Issue for the proposal: [ACTION] Run #86
  • PR for detailing and structuring the proposal
  • Review & Merge the PR
  • Set up a tracking issue for all the actions related to the proposal and sketch the individual action issues
    Leads: @nikimanoledaki @locomundo

Proposal 3 - Report

After deploying the project and running the benchmark tests, we need to collect the metrics.
At the moment we are just reading the Kepler metrics through Prometheus. We need a long-term solution, and we also need to discuss whether we are interested in saving lower-level metrics (CPU, memory, etc.).
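
As one possible shape for this step, here is a hedged sketch that reads an energy metric from Prometheus over its HTTP API and keeps the result as a workflow artifact; the secret name and the PromQL query are assumptions:

```yaml
# Hypothetical standalone "report" workflow; in practice this would be
# chained after the Run step. PROMETHEUS_URL is an assumed secret.
name: Report Metrics

on: workflow_dispatch

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch Kepler metrics from Prometheus
        run: |
          # /api/v1/query is Prometheus' standard instant-query endpoint;
          # the metric name below is the container energy counter Kepler exposes.
          curl -sG "${{ secrets.PROMETHEUS_URL }}/api/v1/query" \
            --data-urlencode 'query=sum(rate(kepler_container_joules_total[5m]))' \
            -o kepler-metrics.json
      - name: Store the result as a workflow artifact
        uses: actions/upload-artifact@v4
        with:
          name: kepler-metrics
          path: kepler-metrics.json
```

Workflow artifacts expire, so this would only be a stopgap until the long-term storage question is settled.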

  • Create an ACTION Issue for the proposal: [ACTION] Proposal 3 - Report #95
  • PR for detailing and structuring the proposal
  • Review & Merge the PR
  • Set up a tracking issue for all the actions related to the proposal and sketch the individual action issues
    Leads: @AntonioDiTuri Chris Chinchilla

Proposal 4 - Benchmarking investigation

While we work on the automation of the current solution for Falco, we might also want to start the discussion about a standardized set of benchmark tests we would like to run. It would be good to involve a benchmarking expert, because we want to be sure to reproduce a meaningful scenario that produces meaningful metrics across the different projects we will review.

  • Create an ACTION Issue for the proposal: [ACTION] Proposal 4: Benchmarking investigation #103
  • PR for detailing and structuring the proposal
  • Review & Merge the PR
  • Set up a tracking issue for all the actions related to the proposal and sketch the individual action issues
    Leads: TBD

If you want to get involved, please drop a comment below.

nikimanoledaki commented

Perfect, thank you @AntonioDiTuri! 🎉

PS: There might be relevant information that we can draw from the design doc (which has now been archived), especially from section 2, "Sustainability Assessment Pipeline", onwards. Some of the info in the design doc has been refactored and moved to our docs, but that was mostly the infrastructure sections, not the pipeline sections.


leonardpahlke commented Apr 16, 2024

👍 It is good that we are talking about how we can distribute the tasks among the contributors and how we can enable them to contribute.
This is the current structure:

  1. TRIGGER AND PROVIDE
  2. RUN
  3. REPORT

A suggestion from my side would be to look at this:

  1. Pipeline / Automation
  2. Benchmarking
  3. Reporting

  1. Pipeline / Automation
    1. What we can do independently of the other pkgs
      - Pulling releases from Falco based on GH actions (see the sketch after this list)
      - Deploying Falco to the cluster triggered by GH actions
      - Documentation on how projects can integrate themselves into our platform
      - (potentially) how maintainers can configure supported benchmarking tests for their project
    2. Where we need input from other pkgs
      - Number of Falco deployments per release, which is based on the benchmarking configured for Falco
  2. Benchmarking:
    1. What we can do independently of the other pkgs
      - How we can utilize existing tests from Falco and other projects for benchmarking
      - Testing out different benchmarking tools (can be done locally or on a separate Equinix VM)
    2. Where we need input from other pkgs
      - Deploying benchmarking scenarios using the pipeline
  3. Reporting: I need more information on what we are planning here. AFAIK we are already collecting information and forwarding it to Grafana. I may be wrong, but to me this looks like a much smaller package than the previous two. (Especially since we don't plan to integrate with devstats yet, etc.)
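
For the release-pulling point above, a minimal sketch of a scheduled watcher; the cadence and the follow-up dispatch mechanism are assumptions:

```yaml
# Hypothetical scheduled watcher for new upstream Falco releases.
name: Watch Falco Releases

on:
  schedule:
    - cron: "0 * * * *"   # hourly; the cadence is an assumption

jobs:
  check-release:
    runs-on: ubuntu-latest
    steps:
      - name: Look up the latest upstream release tag
        run: |
          curl -s https://api.github.com/repos/falcosecurity/falco/releases/latest \
            | jq -r .tag_name
          # A real implementation would compare this tag against the last
          # reviewed version and, on a change, dispatch the deploy workflow
          # (e.g. via `gh workflow run` or a repository_dispatch call).
```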


AntonioDiTuri commented Apr 16, 2024

Hi Leo, thanks for the input!

From what I got from the last meeting, I thought that these points:

  • Documentation how projects can integrate themselves to our platform
  • (potentially) how maintainers can configure supported benchmarking tests for their project

are more in scope for Q3 - at least that's what I remember from our discussion. I will update the roadmap accordingly.

About the naming:
I don't like naming the first package Pipeline/Automation because, to me, the whole deliverable of Q2 is a pipeline.
In the context of the pipeline we have the 3 steps: deploy - run (or benchmark, which is fine for me) - report.

About reporting:
It is true that this package might look smaller, but I see no problem there; let's see the action proposal and, if needed, move the discussion there.

leonardpahlke commented

Are more in scope for Q3

Yes, agree. The other points are more important now (and we also need to have the automation in place before we can write documentation about it...). If we have time in Q2 that would be a good task; otherwise it is Q3 material 👍

About the naming

+1

AntonioDiTuri commented

Switched me and Niki on proposals 2 and 3. Added proposal 4. Clarified proposal 2.


rossf7 commented May 10, 2024

Note for the Report proposal - the SRE metrics requested by the Falco team are listed here:
falcosecurity/cncf-green-review-testing#14 (reply in thread)
