
Bugfix: Docker is consuming too many resources (PR for Develop Branch) #2828

Conversation

@PurnenduMIshra129th commented Jan 5, 2025

What kind of change does this PR introduce?

bugfix

Issue Number:

Fixes #2730

Did you add tests for your changes?

No

Snapshots/Videos:

Before: the prod container is not running.
before-ContainerNotRunning

Before: the container uses all the available RAM and the full CPU because no resource constraints are set. If we increase the load, CPU usage goes above 100%.
before-containerUsingSystemResourcesSeeMemorySection

Before: the load average increases rapidly after we start the container and run some I/O and DB-related tasks.
before-loadAverage
Screenshot (72)

Updated: the container now uses the specified RAM and CPU.
updated-containerUsingSystemResources

Before, all files were copied into the final image and .dockerignore did not work properly, because the root directory was mounted in the volumes section, which overrode the .dockerignore exclusions. Now only the required files go into the final image.
updated-fileStructureOf Prod
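The volume change described above can be sketched roughly like this (the service name and paths are illustrative, not necessarily the PR's exact configuration):

```yaml
# Illustrative sketch: mount only node_modules as an anonymous volume,
# instead of bind-mounting the whole project root. The image's files,
# already filtered by .dockerignore at build time, are then not shadowed
# by host files at runtime.
services:
  talawa-api:                      # hypothetical service name
    build: .
    volumes:
      - /usr/src/app/node_modules  # anonymous volume for node_modules only
```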

Updated: the image size is reduced, since I used Alpine-based images for the services where they are available, like Redis and Caddy. Alpine variants of Mongo and MinIO are not available on Docker Hub.
updated-imageSize

Updated: the load average no longer climbs rapidly, because specifying limits reduces memory swapping on low-spec devices.
updated-loadAverage

Updated: the prod container is now running.
updated-running-container-prod

If relevant, did you update the documentation?

No

Summary


This pull request introduces several optimizations and improvements to the Docker build and deployment process for the application. The main objective of this change is to enhance performance, reduce the size of the final Docker image, and improve runtime efficiency.

Key changes include:

  1. Implemented a working multi-stage build to separate the build and runtime environments. The first stage installs dependencies and builds the application, while the second stage includes only the essential files, such as the build folder, package.json, node_modules, and the necessary certificates. This was not working previously.
  2. Added resource limits to the Docker Compose configuration to ensure efficient use of memory and CPU.
  3. Ensured better runtime performance by using the production build (npm run prod) instead of the development build and disabling unnecessary watchers.
  4. There is now a single compose file and a single Dockerfile. Deleted docker-compose.dev.yaml, docker-compose.prod.yaml, Dockerfile.prod, and Dockerfile.dev.
  5. Mounts only the node_modules volume; it is not required to mount the whole root directory.
  6. Changed the command used to start the application.
  7. Initially .dockerignore was not working properly; all files were copied, including content that .dockerignore should have excluded.
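As a rough illustration of the resource limits in point 2, Docker Compose supports per-service memory and CPU caps (the values here are examples, not necessarily the ones used in the PR):

```yaml
# Illustrative example: cap memory and CPU per service so a busy container
# cannot consume the whole host. Compose v2 honors deploy.resources.limits
# with plain `docker compose up`.
services:
  mongo:
    image: mongo
    deploy:
      resources:
        limits:
          memory: 1G     # hard memory cap for the container
          cpus: "1.0"    # at most one CPU core
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
```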
Does this PR introduce a breaking change?

No

Other information

1. We should use npm run prune --production to remove the development dependencies. It will reduce our image size by around 200 MB. It is not implemented yet due to some misconfiguration in package.json; after finding out which packages are not used in production, we can enable it. It is very effective.
2. Currently the mongo image, at around 900 MB, is larger than all of our other images and accounts for around 60% of the total image size. No Alpine image is currently available for Mongo. It also consumes more RAM than all of our other services, but I have implemented RAM constraints here, so it will not exceed the specified limit.
3. I will update the documentation after incorporating the necessary changes from the mentors and CodeRabbit.
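If the pruning from point 1 is later enabled, the multi-stage build would look roughly like the sketch below (paths, port, and script names are illustrative; the PR's actual Dockerfile may differ):

```dockerfile
# Illustrative multi-stage sketch, not the PR's exact Dockerfile.
FROM node:20.10.0 AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci                       # full install (dev + prod) for the build stage
COPY . .
RUN npm run build \
 && npm prune --omit=dev         # drop devDependencies (the ~200 MB mentioned above)

# Final stage: copy only what the runtime needs.
FROM node:20.10.0
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/build ./build
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/package.json ./
EXPOSE 4000
CMD ["npm", "run", "prod"]
```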

Have you read the contributing guide?

Yes

Summary by CodeRabbit

Release Notes

  • Infrastructure Updates

    • Updated Docker configurations for development and production environments.
    • Upgraded Node.js base image in production to version 20.10.0.
    • Optimized Docker build and deployment processes.
  • Dependency Management

    • Added cls-bluebird package.
    • Refined dependency installation scripts.
    • Improved package management strategies.
  • Performance Improvements

    • Adjusted resource limits for services, including MongoDB and Redis.
    • Optimized container configurations.
    • Enhanced service startup and initialization processes.
  • New Services

    • Introduced Caddy service for improved web routing and management.
  • Service Renaming

    • Renamed mongodb to mongo and talawa-api-prod-container to talawa-api-prod in Docker configurations.
  • Script Modifications

    • Updated scripts in package.json for improved functionality and error handling.
  • Test Updates

    • Revised expected values in tests for volunteerRanks to align with new data expectations.


coderabbitai bot commented Jan 5, 2025

Walkthrough

This pull request focuses on optimizing Docker configurations and resource utilization for the Talawa API project. The changes span multiple configuration files including Dockerfiles, docker-compose files, and package.json. The primary objectives include improving resource efficiency, updating Node.js versions, and refining the deployment setup across development and production environments. The modifications aim to reduce resource consumption and streamline the containerization process.

Changes

File Change Summary
Dockerfile.dev - Updated base image to node:20.10.0
- Modified COPY instructions to include both package.json and package-lock.json
- Updated comments for clarity
Dockerfile.prod - Updated base image to node:20.10.0
- Expanded file copying in final image stage
- Changed application start command to npm run start
docker-compose.prod.yaml - Renamed services (mongodb → mongo, talawa-api-prod-container → talawa-api-prod)
- Added new caddy service
- Updated volume configurations and resource limits
docker-compose.dev.yaml - Updated service images
- Added resource limits for services
- Modified volume mappings
package.json - Updated scripts for minio:check and start:with-minio
- Added cls-bluebird dependency
- Removed some dev dependencies

Assessment against linked issues

Objective Addressed Explanation
Reduce CPU and RAM utilization by 75% [#2730]
Improve Docker performance for easier deployment
Make Talawa API runnable on mid-range laptops

Possibly related issues

Possibly related PRs

Suggested reviewers

  • palisadoes
  • adithyanotfound

Poem

🐰 Docker's dance, a rabbit's delight,
Containers slim, resources tight,
Alpine images, lean and mean,
Efficiency now reigns supreme!
Hop to performance, code so bright! 🚀


github-actions bot commented Jan 5, 2025

Our Pull Request Approval Process

Thanks for contributing!

Testing Your Code

Remember, your PRs won't be reviewed until these criteria are met:

  1. We don't merge PRs with poor code quality.
    1. Follow coding best practices such that CodeRabbit.ai approves your PR.
  2. We don't merge PRs with failed tests.
    1. When tests fail, click on the Details link to learn more.
    2. Write sufficient tests for your changes (CodeCov Patch Test). Your testing level must be better than the target threshold of the repository
    3. Tests may fail if you edit sensitive files. Ask to add the ignore-sensitive-files-pr label if the edits are necessary.
  3. We cannot merge PRs with conflicting files. These must be fixed.

Our policies make our code better.

Reviewers

Do not assign reviewers. Our Queue Monitors will review your PR and assign them.
When your PR has been assigned reviewers contact them to get your code reviewed and approved via:

  1. comments in this PR or
  2. our slack channel

Reviewing Your Code

Your reviewer(s) will have the following roles:

  1. arbitrators of future discussions with other contributors about the validity of your changes
  2. point of contact for evaluating the validity of your work
  3. person who verifies matching issues by others that should be closed.
  4. person who gives general guidance in fixing your tests

CONTRIBUTING.md

Read our CONTRIBUTING.md file. Most importantly:

  1. PRs with issues not assigned to you will be closed by the reviewer
  2. Fix the first comment in the PR so that each issue listed automatically closes

Other

  1. 🎯 Please be considerate of our volunteers' time. Contacting the person who assigned the reviewers is not advised unless they ask for your input. Do not @ the person who did the assignment otherwise.
  2. Read the CONTRIBUTING.md file.

coderabbitai bot left a comment
Actionable comments posted: 6

🧹 Nitpick comments (1)
compose.yaml (1)

114-114: Add newline at end of file

Add a newline character at the end of the file to comply with YAML best practices.

🧰 Tools
🪛 yamllint (1.35.1)

[error] 114-114: no new line character at the end of file

(new-line-at-end-of-file)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3600eaa and ed3d884.

📒 Files selected for processing (5)
  • Dockerfile (1 hunks)
  • Dockerfile.dev (0 hunks)
  • Dockerfile.prod (0 hunks)
  • compose.yaml (7 hunks)
  • docker-compose.prod.yaml (0 hunks)
💤 Files with no reviewable changes (3)
  • Dockerfile.dev
  • Dockerfile.prod
  • docker-compose.prod.yaml
🧰 Additional context used
🪛 yamllint (1.35.1)
compose.yaml

[error] 114-114: no new line character at the end of file

(new-line-at-end-of-file)

⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Testing Application (22.x)
  • GitHub Check: Analyse Code With CodeQL (typescript)
🔇 Additional comments (3)
Dockerfile (2)

33-37: LGTM! Production configuration looks good

The port exposure and production command configuration align well with the PR's resource optimization objectives.


31-31: Review security implications of copying certs directory

Copying the certs directory into the Docker image might expose sensitive data. Consider:

  1. Mounting certificates as secrets or volumes
  2. Ensuring no sensitive keys are baked into the image

Let's check if the certs directory contains sensitive files:

✅ Verification successful

From the search results, I can see that:

  1. The certs directory exists and is used for SSL certificates
  2. The certificates are generated during development using the generate:ssl-private-key npm script
  3. The certificates are used in src/index.ts for HTTPS server setup
  4. The .gitignore file lists cert.pem, key.pem, and csr.pem

Let me verify the actual certificate files:


Certificate handling is properly secured

The review concern about sensitive certificate data is unwarranted because:

  1. The certs directory only contains .gitkeep and no sensitive files are committed
  2. Certificate files (cert.pem, key.pem, csr.pem) are properly excluded via .gitignore
  3. Development certificates are generated locally via npm script
  4. The Dockerfile copy is safe as it only copies certificates from the build context, which won't contain sensitive files due to .gitignore
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for common sensitive file patterns in certs directory
fd . certs -t f -x file {} | grep -i "key\|private\|secret"

Length of output: 63


Script:

#!/bin/bash
# Let's first check if the certs directory exists and list its contents
fd . -t d -d 1 | grep -i "certs"
fd . certs -t f 2>/dev/null || echo "Directory not found or empty"

# Let's also search for any references to certs in the codebase
rg -i "certs?" -A 2 -B 2

Length of output: 78083


Script:

#!/bin/bash
# List all files in certs directory
fd . certs -t f -x ls -la {}

# Check if any sensitive files are committed
git ls-files certs/

Length of output: 69

compose.yaml (1)

Line range hint 60-84: LGTM! Well-configured API service

The service configuration shows good practices:

  • Appropriate resource limits
  • Proper service dependencies with health checks
  • Optimized volume mounting


codecov bot commented Jan 5, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.68%. Comparing base (c0468a4) to head (c21adcc).
Report is 789 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #2828      +/-   ##
===========================================
- Coverage    98.17%   97.68%   -0.50%     
===========================================
  Files          184      364     +180     
  Lines        10767    18563    +7796     
  Branches       835     2677    +1842     
===========================================
+ Hits         10571    18133    +7562     
- Misses         186      425     +239     
+ Partials        10        5       -5     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@palisadoes palisadoes added the ignore-sensitive-files-pr label ("The contributor has a legitimate reason for editing protected files") Jan 5, 2025
@palisadoes
Contributor

  1. Make sure Code Rabbit approves your PR
  2. Make sure all the tests pass.
    1. It's important to ensure the updated docker test works.
    2. The sensitive files test will be ignored with your next commit

@PurnenduMIshra129th
Author

@palisadoes I am creating two scripts to dynamically detect the difference between development and production dependencies. Where should I place them? Should I create a separate folder for this? Some packages are listed under dev but also used in production, so in package.json we have to move them to the production dependencies and then remove the development entries. Should I create a separate issue for this?

@PurnenduMIshra129th
Author

See these screenshots of production dependencies that are required but listed as dev dependencies. When we run npm prune --production, it removes devDependencies from node_modules, so the production environment did not start. With the script we can dynamically detect the difference and install these as production dependencies, so after pruning they are not deleted and we can start the server without any issue. npm prune is necessary, as it optimizes our packages by removing devDependencies and unwanted node_modules that are not required to start the production server. It should reduce the image size by around 200 MB; our talawa-api container is currently 516 MB, so after this it should decrease to around 300-350 MB, which is what this PR needs.

added_new_command In Package json
requiredDependencyRequiredForProductionButUsedInDevDependency
ThesedependecyRequiredAsProductionButListedAsDevDepend

coderabbitai[bot]
coderabbitai bot previously approved these changes Jan 6, 2025
Contributor

@palisadoes palisadoes left a comment


  1. The objective was to only have production. Why is dev a part of this at all?
  2. Shouldn't the pull-request workflow be updated to support production tests only?

@PurnenduMIshra129th
Author

@palisadoes No, it is not related to the dev part. For production we have to run npm prune --production; this command deletes the node_modules that are not required for production to run the app. Those modules are only needed during development. However, some modules listed under devDependencies are used in the production environment, for example cls-bluebird. So when we run prune, it removes that kind of package even though production requires it. For our use case we have to find the packages that the production workflow needs and add them to the production dependencies. Then, when prune deletes the dev dependencies, we will not hit "node_modules not found" issues. There are also packages that production does not need at all, for example "@types/cors": "^2.8.17",
"@types/express": "^4.17.17",
"@types/express-rate-limit": "^6.0.2",
"@types/graphql-depth-limit": "^1.1.6",
"@types/i18n": "^0.13.12",
"@types/inquirer", and so on. These are not required in production; if we don't remove them they simply go unused, but the node_modules size increases, so it is better to remove them and optimize the packages.

@PurnenduMIshra129th
Author

PurnenduMIshra129th commented Jan 7, 2025

@palisadoes yes, I have opened a feature request for your second question as well, so I am waiting for your response on what to do. After npm prune runs successfully, I will update the Dockerfile and commit again to this PR. After this PR is merged, I will update the documentation, because these are interrelated. I have now renamed the file to compose.yaml, so if you suggest a different file name I will update the command accordingly: docker compose -f compose.yaml up --build would become docker compose -f suggestedName.yaml up --build. This dependency optimization will also help if we want to run the server without Docker in production.

@adithyanotfound

@PurnenduMIshra129th

  1. Please replace npm run prod with npm run start:with-minio to use MinIO
  2. If MinIO is not required in production, kindly confirm with someone and, if unnecessary, remove it from the docker-compose file. It will reduce the image size.

@palisadoes
Contributor

@PurnenduMIshra129th this is not an area of strength for me. Please work with @adithyanotfound as he figured out the steps to make this work. I'd like to get this merged because it's a good feature for the develop branch

@PurnenduMIshra129th
Author

@adithyanotfound ok, this is a good idea. I just asked about this in Slack.

@adithyanotfound

@PurnenduMIshra129th I think you should proceed as follows:

  1. In my opinion, MinIO is required by the API in production. Hence please replace npm run prod with npm run start:with-minio in the Dockerfile. MinIO can be removed from the image if deemed unnecessary in the future.

  2. Currently the MinIO command is broken. The gist linked below fixes this, as well as the segregation between dev and prod dependencies. Copy the code from the gist into your package.json and then uncomment the npm prune command in the Dockerfile. You can work on the scripts later as a separate issue.
    https://gist.github.com/adithyanotfound/e256de87b1bd0fb2bb5dde8351ea5edf

  3. Replace npm prune --production with npm prune --omit=dev, since --production is deprecated.

  4. Fix the failing Docker Check PR workflow. Also add npm prune command to test production build workflow.
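Point 1 amounts to a one-line change to the Dockerfile's start command; a sketch, assuming the script names from the comment above:

```dockerfile
# Sketch of the suggested change to the final stage's start command.
# Before:
#   CMD ["npm", "run", "prod"]
# After: start the API together with MinIO.
CMD ["npm", "run", "start:with-minio"]
```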

@PurnenduMIshra129th
Author

@adithyanotfound ok, I will proceed with your suggestions

@varshith257 (Member) left a comment

Same comment #2730 (comment)

@varshith257
Member

varshith257 commented Jan 11, 2025

Using a single Dockerfile for both dev and prod feels like trying to teach a bird to swim: it might work, but it’s definitely not what it was made for. Dev and prod each have their own unique needs.

@PurnenduMIshra129th
Author

@varshith257 do you want both dev and prod images? Then I will create them, no problem.

@varshith257
Member

Yes keep them

@PurnenduMIshra129th
Author

@palisadoes @varshith257 I had some other work previously, so I have not been active for the past 1-2 days. I will get this fixed today so that it can be merged.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
docker-compose.dev.yaml (1)

116-116: Add newline at end of file.

Add a newline character at the end of the file to comply with YAML best practices.

   talawa-network:
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 116-116: no new line character at the end of file

(new-line-at-end-of-file)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b64413c and c21adcc.

📒 Files selected for processing (3)
  • docker-compose.dev.yaml (7 hunks)
  • package.json (3 hunks)
  • tests/resolvers/Query/getVolunteerRanks.spec.ts (4 hunks)
🧰 Additional context used
📓 Learnings (1)
docker-compose.dev.yaml (2)
Learnt from: PurnenduMIshra129th
PR: PalisadoesFoundation/talawa-api#2828
File: docker-compose.dev.yaml:29-30
Timestamp: 2025-01-19T21:07:15.642Z
Learning: In development Docker environments for Talawa API, Redis port (6379) should remain accessible to other services without localhost binding to allow proper inter-service communication and debugging.
Learnt from: PurnenduMIshra129th
PR: PalisadoesFoundation/talawa-api#2828
File: docker-compose.dev.yaml:18-19
Timestamp: 2025-01-19T21:06:57.200Z
Learning: In docker-compose.dev.yaml, MongoDB port is bound to all interfaces (27017:27017) instead of localhost (127.0.0.1:27017:27017) to avoid port conflicts in development environment and enable container-to-container communication.
🪛 YAMLlint (1.35.1)
docker-compose.dev.yaml

[error] 116-116: no new line character at the end of file

(new-line-at-end-of-file)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Testing Application (22.x)
🔇 Additional comments (11)
tests/resolvers/Query/getVolunteerRanks.spec.ts (3)

40-45: LGTM! Test expectations updated correctly.

The updated test expectations for volunteer ranks are consistent with the implementation.


61-64: LGTM! Weekly volunteer rank test is accurate.

The test case correctly verifies the weekly volunteer hours calculation.


81-83: LGTM! Monthly and yearly volunteer rank tests are consistent.

Both monthly and yearly test cases correctly expect 6 hours for the top volunteer.

Also applies to: 98-100

docker-compose.dev.yaml (5)

14-17: Resource limits for MongoDB look appropriate.

The 1G memory and 1.0 CPU limits are suitable for development environment considering the complex data model with 40+ schemas and resource-intensive operations.


29-36: Good choice of Redis image and resource limits.

Using the Alpine-based Redis image reduces the container size. The resource limits (512M memory, 0.5 CPU) are appropriate for development.


52-56: Resource limits for MinIO are well-balanced.

The 512M memory and 0.5 CPU limits are suitable for development usage of MinIO.


82-86: Appropriate resource allocation for the API service.

The 1G memory and 1.0 CPU limits for the API service are well-balanced for development needs.


102-106: Good resource optimization for Caddy.

The lightweight limits (256M memory, 0.25 CPU) are appropriate for the reverse proxy service in development.

package.json (3)

10-12: Improved MinIO check and startup process.

The updated scripts properly handle MinIO initialization by ensuring the build is complete before checking MinIO installation.


25-25: Robust prepare script with fallback.

The prepare script now gracefully handles cases where husky is not installed, which is beneficial for Docker builds.


106-108: Verify CLS package compatibility.

The addition of both cls-bluebird and cls-hooked might lead to conflicts. Ensure they are compatible and necessary.

✅ Verification successful

Both CLS packages are compatible and properly used

The packages serve complementary purposes:

  • cls-hooked manages the CLS namespaces
  • cls-bluebird provides Bluebird promise integration with CLS

The codebase correctly initializes and uses both packages without conflicts.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential conflicts or issues with CLS packages
echo "=== Checking for CLS usage patterns ==="
rg "cls-bluebird|cls-hooked" -A 5

echo -e "\n=== Checking package compatibility ==="
npm info cls-bluebird peerDependencies
npm info cls-hooked peerDependencies

Length of output: 5024

@PurnenduMIshra129th
Author

@palisadoes @varshith257 please review once. I have fixed the test error.

@palisadoes
Contributor

palisadoes commented Jan 28, 2025

@varshith257

  1. PTAL
  2. We need this for the demo site

@varshith257
Member

@palisadoes I think we need some benchmark results, e.g. pulling the changes from this PR vs upstream.

With optimizations, benchmarking actually helps us know what has been optimized and confirm the changes are not error-prone before we merge.

The changes look good, but I need benchmark results for the Docker image sizes of both dev and prod and for CPU utilisation (which is the main reason behind this PR).

@palisadoes
Contributor

palisadoes commented Jan 30, 2025

@varshith257

  • What benchmarking system do you recommend?

@PurnenduMIshra129th

  • Do you have any suggestions?

@varshith257
Member

@PurnenduMIshra129th
Author

@palisadoes I think we need some benchmark results, e.g. pulling the changes from this PR vs upstream.

With optimizations, benchmarking actually helps us know what has been optimized and confirm the changes are not error-prone before we merge.

The changes look good, but I need benchmark results for the Docker image sizes of both dev and prod and for CPU utilisation (which is the main reason behind this PR).

@varshith257 @palisadoes now we won't see any difference between the Docker dev and prod image sizes. They are now similar to upstream, since we reverted the use of slim Node images. In prod, after using npm prune, the image size in my case was reduced by around 200 MB (516 MB before, ~300 MB after). For the dev image, using a slim Node image reduced it by around 600 MB (initially 1.3 GB or its equivalent, 556 MB after). But we are not using that now, for stability, per your suggestion. The difference we do see is in resource utilization. Also, in the development image we can't use a multi-stage build, as it is not required.

@PurnenduMIshra129th
Author

@palisadoes @varshith257 I have used a tool for load testing, and after putting load on the server, I observed the performance on Linux using the top command. Link to the package: https://www.npmjs.com/package/artillery . Give the package a read; it is very useful. We can set how many virtual users to create, how long they will make requests to the server, and the rate of requests per second. When the load test ends, it reports the server performance: how many requests succeeded and how many failed. We can define separate GraphQL queries and shuffle them to create a real-world scenario.

@varshith257
Member

Yes, we have never gotten to load testing, as we focused on the dev environment until today. As we are deploying to a cloud instance soon, it may be a good time for load testing.

@palisadoes
Contributor

Please load test before we merge this

@PurnenduMIshra129th
Author

@palisadoes are you asking me to share the load test configuration file here?

@palisadoes
Contributor

The results

@PurnenduMIshra129th
Author

@palisadoes ok i will share

@PurnenduMIshra129th
Author

PurnenduMIshra129th commented Jan 30, 2025

@palisadoes @varshith257 this is the configuration file of my configuration file
load-test.txt

The steps are straightforward:

- Download the file.
- Rename it from txt to yml, e.g. load-test.yml.
- Install artillery globally: npm i -g artillery
- Run this command: artillery run load-test.yml
- Now you are ready to go.
- Analyze the success and failure results for both cases.
- While artillery is running, check server performance with top or ps, whichever you prefer.
- Check both cases: with Docker and without Docker.
- If Docker shows CPU or RAM overuse, reduce the RAM and CPU limits in the docker-compose file.
- Inspect the running container with docker stats.
- This gives a pretty good picture of both cases.
- Any other query can also be defined in the yml file; artillery will switch between queries dynamically to create complex, real-world scenarios.
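The RAM/CPU limits mentioned in the steps above can be sketched in a compose file like this — the service name, image tag, and values are assumptions, not the actual talawa-api compose file:

```yaml
# Hypothetical fragment. With plain `docker compose up` the same idea can
# also be expressed via `mem_limit`/`cpus`; `deploy.resources` applies
# under swarm mode and in Compose versions that honour it.
services:
  api:
    image: talawa-api:prod
    deploy:
      resources:
        limits:
          cpus: "1.0"     # cap at one CPU core
          memory: 1g      # cap at 1 GB RAM
```

With limits like these in place, `docker stats` should show the container's memory and CPU usage staying under the caps instead of consuming the whole host.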

Anyway, here are the results @palisadoes
(screenshots: load test results)

@palisadoes
Copy link
Contributor

@varshith257 Are these stats acceptable?

@varshith257
Copy link
Member

varshith257 commented Jan 31, 2025

@palisadoes The results indicate severe performance issues in the login API. Only 3 out of 30 requests succeeded; most requests timed out (ETIMEDOUT), and the API response time is too high (~8-9 s), leading to server overload. Database queries, concurrency limits, or API inefficiencies are the likely causes of this behaviour.

Also, I don't know whether this was run against the dev or prod environment.

We should always load test and benchmark every migration we do, so that we catch issues early and fix them.

@PurnenduMIshra129th Can you open an issue with these results, and also make sure the results are against the prod environment?
In the dev environment we can't conclude much, as it's an experimental lab.

Copy link
Member

@varshith257 varshith257 left a comment


The results point to our API performance issues; the Docker side looks good, with reduced image size and resource utilisation.

LGTM

@palisadoes
Copy link
Contributor

@varshith257

  1. Are you saying that the Docker resources are OK, but the app performance is poor, and therefore we need to refactor the develop code?
  2. Does it make sense to test performance as part of a GitHub Action prior to approving PRs? We could potentially confirm or cap the quota of compute if we did that.
  3. How do other projects handle this?

@varshith257
Copy link
Member

varshith257 commented Jan 31, 2025

@palisadoes If we treat this whole API as a monorepo instance, then yes, introducing a workflow that runs the load test @PurnenduMIshra129th did makes sense; once we fix the issues in the develop branch, it could catch early which PR introduces regressions. But note that we need to benchmark in the prod environment, and I don't know whether this load test was done there. If not, we should load test there, fix these issues, and then introduce the workflow we are discussing. Every app user expects a smooth experience, so our API performance should be top notch, and there are several optimisation methods to achieve that.
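A workflow along the lines discussed might look roughly like this — the file name, start command, Node version, and thresholds are assumptions, not an existing talawa-api workflow:

```yaml
# Hypothetical .github/workflows/load-test.yml sketch.
name: Load test
on: [pull_request]
jobs:
  artillery:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run start &          # assumed prod start command
      - run: npx artillery run load-test.yml --output report.json
      # A pass/fail gate could live in the artillery config itself
      # (artillery supports `ensure` conditions on latency/error rate).
```

Running this on every PR would surface latency or error-rate regressions before merge rather than in production.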

Are you saying that docker resources are OK, but that the app performance is poor, therefore we need to refactor the develop code

Yes

How do other projects handle this?

At the MVP stage, every project load tests its app/website server performance and fixes issues before launching.

Btw, I don't know exactly where we are, but last year we thought that at least by summer 2025 we would have an MVP with a beta version. Is that on track, or are there any deviations?

@PurnenduMIshra129th
Copy link
Author

@palisadoes @varshith257 I tested this against the dev environment. Yes, I can open an issue for it.

@palisadoes palisadoes merged commit 8ab6bdd into PalisadoesFoundation:develop Jan 31, 2025
16 of 17 checks passed
PurnenduMIshra129th added a commit to PurnenduMIshra129th/talawa-api that referenced this pull request Jan 31, 2025
…alisadoesFoundation#2828)

* some changs

* updated docker.prod

* updated Dockerfile.prod and dev

* created single compose file for prod and removed development dockercompose

* merged branch with upstream

* again added the docke-compose.dev.yaml , docker-compose.prod.yaml,Dockerfile.dev,Dockerfile.prod

* changed package.json, changed start command with minio,commented the script for analyze depenency

* changed according to suggestion

* changed the jq installation to debian syntax

* changed command of minio start

* updated script of minio and removed unnecessary imports in 2nd stage of image and some suggestion

* suggestion from ai

* Update docker-compose.dev.yaml line 19

Co-authored-by: Vamshi Maskuri <[email protected]>

* Update docker-compose.dev.yaml line 35

Co-authored-by: Vamshi Maskuri <[email protected]>

* Update docker-compose.dev.yaml line55

Co-authored-by: Vamshi Maskuri <[email protected]>

* Update docker-compose.prod.yaml line 16

Co-authored-by: Vamshi Maskuri <[email protected]>

* Update docker-compose.prod.yaml

line 34

Co-authored-by: Vamshi Maskuri <[email protected]>

* Update docker-compose.prod.yaml

line 54

Co-authored-by: Vamshi Maskuri <[email protected]>

* Update docker-compose.prod.yaml

line 84

Co-authored-by: Vamshi Maskuri <[email protected]>

* updated by suggestion

* updated dockerFiles

* added npm install in prod

* modify test expectation for getVolunteerRank.spec.ts componennt

---------

Co-authored-by: Vamshi Maskuri <[email protected]>
Labels
ignore-sensitive-files-pr The contributor has a legitimate reason for editing protected files
Projects
None yet
Development

Successfully merging this pull request may close these issues.

4 participants