fix: CXSPA-10184 Punchout Inspect cart page header too wide #20331

Closed
wants to merge 7 commits into release/2211.41.x from fix/CXSPA-10184

Conversation


@Pucek9 Pucek9 commented May 16, 2025

Closes CXSPA-10184

anjana-bl and others added 6 commits May 16, 2025 15:37
…o peerDeps update) (#20309)

This PR bumps the globally **installed** version of rxjs in our repo from 7.8.1 to 7.8.2.
However, we keep the minimal requirement for customers at the level `^7.8.0`.

Unfortunately, the installed version of `@angular-devkit/schematics/testing` has an internalized copy of rxjs in `node_modules/@angular-devkit/schematics/node_modules/rxjs`, which is still at version 7.8.1.

The typings between rxjs@7.8.1 and rxjs@7.8.2 are a bit incompatible, which caused typing issues in unit tests of schematics when calling the function `firstValueFrom` (imported from the globally installed rxjs@7.8.2) with an argument produced by the `schematicRunner` (imported from `@angular-devkit/schematics/testing`, which is based on the internalized version of rxjs in `node_modules/@angular-devkit/schematics/node_modules/rxjs@7.8.1`).

I tried to bump just `@angular-devkit/schematics` to a higher version (so that it starts using a higher internalized version of rxjs - 7.8.2 instead of 7.8.1). I also tried upgrading all the other Angular libs at once (to the latest patch 19.0.7 from the current 19.0.4), but couldn't proceed without peerDependencies errors.
Later on, I checked in a separate "empty test npm repo" that @angular-devkit/schematics 19.0.7 (the highest patch on the 19.0.x version line) still uses rxjs@7.8.1 - so even bumping it in our repo to 19.0.7 wouldn't help.

So I decided to use the following workaround:
In those unit tests which use the rxjs function `firstValueFrom`, we import `firstValueFrom` not from the globally installed rxjs@7.8.2, but from the internalized version of rxjs inside `@angular-devkit/schematics/node_modules/rxjs@7.8.1`, to be type-compatible with the `schematicRunner` from `@angular-devkit/schematics/testing`.

Please note the workaround is needed only in unit tests, not in the source code of our libs. The public API that we ship to our customers didn't change.

This workaround can be removed in the future, when we bump `@angular-devkit/schematics` (which will happen at the latest during our yearly February Framework Update Release, when we update all Angular packages).
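
For illustration, the change in an affected spec file looks roughly like this (a sketch only; concrete file names and the surrounding test code differ):

```ts
// Before (typing clash: the global rxjs@7.8.2 types vs. the runner's internalized rxjs@7.8.1):
// import { firstValueFrom } from 'rxjs';

// After: import from the copy internalized by @angular-devkit/schematics,
// so the types match the Observables produced via its testing utilities.
import { firstValueFrom } from '@angular-devkit/schematics/node_modules/rxjs';
```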

fixes https://jira.tools.sap/browse/CXSPA-10172
…ber of cache entries

**Context**
------------

Each NodeJS pod has some available memory (the `max_memory_restart` option in the `pm2` process manager), and when the total process memory exceeds this limit, the process is restarted. Ideally we'd like to avoid such situations, because restarts disrupt the continual responsiveness of the SSR server.

Angular in SSR mode needs a lot of "working memory" for rendering all incoming requests (e.g. creating the DOM structure in NodeJS memory, temporarily storing big responses from backend endpoints, creating the state of the whole Angular application). When the rendering is done, Angular destroys its application instance, and after a moment NodeJS frees up this memory with the Garbage Collector. It's important to notice that high peaks of "working memory" are needed temporarily during renderings, but ideally they should not exceed the `max_memory_restart` limit of the `pm2` process manager.

Besides that, some memory is needed for the in-memory cache of rendered HTML strings, stored in the Spartacus `OptimizedSsrEngine` - or, to be precise, in the Spartacus `RenderingCache` class.
Currently this cache size can be limited only by the **number of entries**, via the `cacheSize` option. This is not a very precise method of limiting the cache, because each HTML string can have a different size and it's not possible to reliably predict in advance the actual space taken by all the cache entries. In other words, you could only assume in advance how much space an average HTML cache entry takes, and then calculate how many such average entries a NodeJS process can contain without restarting due to a memory limit. In practice, a few exceptionally big HTML cache entries (big compared to the average) could cross the assumed memory budget for the cache, and together with the high peaks of "working memory" needed for the actual renderings, the process could be restarted due to exceeding the `max_memory_restart` limit.

Although it's difficult to predict the "working memory" needed for rendering each page (it's possible only through experimentation), the memory allocated for the string cache entries should be predictable. So it would be much better and more bullet-proof to limit the cache size in actual memory units (like `800 MB`) rather than by an arbitrary number of cache entries (like `3000 entries`).

**New feature**
-----------------

Because the total available memory for the NodeJS process is defined in memory units (e.g. 3GB) in the configuration of the NodeJS pod, it would be good to also define a hard, precise limit of the cache size in memory units. This PR introduces such a new config option, `cacheSizeMemory` (and deprecates the old `cacheSize` property), guarded by the new SSR feature toggle `ssrFeatureToggles.limitCacheByMemory`. The default value is set to `800MB` (which takes effect only when the new feature toggle is enabled).
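
A minimal configuration sketch (the surrounding option shape is assumed here; only the names `cacheSize`, `cacheSizeMemory`, and `ssrFeatureToggles.limitCacheByMemory` come from this PR):

```ts
// Sketch of the SSR options passed to the optimized SSR engine.
const ssrOptions = {
  cache: false,                     // recommended value (see the last section)
  // cacheSize: 3000,               // deprecated: limits by number of entries
  cacheSizeMemory: 800 * 1_000_000, // new: limits by memory, 800 MB (SI units)
  ssrFeatureToggles: {
    limitCacheByMemory: true,       // opt in to the new behavior
  },
};
```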

A new class `RenderingCacheSizeManager` is introduced in this PR. It constantly tracks the summed size of all cache entries, ensuring the configured max `cacheSizeMemory` won't be exceeded. When adding a new entry would exceed this limit, we keep removing the oldest entries until there is enough free memory for the new entry to be stored.
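
The eviction idea, as a minimal sketch (hypothetical names; not the actual `RenderingCacheSizeManager` code):

```ts
// Keeps the summed size of all entries under a byte limit by evicting oldest-first.
class MemoryBoundedCache {
  private entries = new Map<string, string>(); // Map preserves insertion order
  private usedBytes = 0;

  constructor(
    private readonly maxBytes: number,
    private readonly sizeOf: (html: string) => number
  ) {}

  set(key: string, html: string): void {
    const size = this.sizeOf(html);
    // Evict the oldest entries until the new one fits under the limit.
    for (const [oldKey, oldHtml] of this.entries) {
      if (this.usedBytes + size <= this.maxBytes) {
        break;
      }
      this.entries.delete(oldKey);
      this.usedBytes -= this.sizeOf(oldHtml);
    }
    // Store the entry only if it can fit within the limit at all.
    if (size <= this.maxBytes) {
      this.entries.set(key, html);
      this.usedBytes += size;
    }
  }
}
```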

Through experimentation, we measured that each character of a cached HTML string takes 2 bytes (more on this topic in the next section). Additionally, we exposed a new SSR option `cacheEntrySizeCalculator`, which allows for customizing the strategy for calculating the size of a cache entry. By default it assumes 2 bytes for each character of the HTML string. Theoretically it's possible to also cache error objects (which is not recommended in our docs), but for completeness our default calculator roughly approximates the size of an error by summing up its 3 string properties: name, message, trace. This is not ideal and prone to under-estimation, especially when the error object has many more properties or is not even an instance of `Error` (theoretically `cacheEntry.err` has type `any`). For most customers, who don't cache errors (as recommended), the default `cacheEntrySizeCalculator` should suffice. But customers who - for some reason - deliberately want to cache some error objects can customize the non-ideal default logic of calculating the size of the cached errors.
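
A rough sketch of the default estimation described above (an assumed shape; the real `DefaultCacheEntrySizeCalculator` may differ in details, and the standard `Error#stack` is used here for the stack-trace property):

```ts
// Estimate the bytes a cache entry occupies on the V8 heap.
function estimateEntrySize(entry: { html?: string; err?: any }): number {
  let bytes = 0;
  if (entry.html) {
    bytes += 2 * entry.html.length; // 2 bytes per character (SeqTwoByteString)
  }
  if (entry.err) {
    // Rough approximation via the error's string properties; prone to
    // under-estimation for errors with extra properties or non-Error values.
    const { name = '', message = '', stack = '' } = entry.err;
    bytes +=
      2 * (String(name).length + String(message).length + String(stack).length);
  }
  return bytes;
}
```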

Future plans: in another PR I plan to move all files related to the rendering cache to the subfolder `/rendering-cache` (when I initially tried to do it in this PR, `git diff` treated some files as removed/created instead of edited+moved, which made the changes harder to review).

**How "2 bytes per character" estimation was determined**
-------------------------------------------------------------
The NodeJS V8 engine can represent strings with various data structures. Some of them (like `SeqOneByteString`) allocate 1 byte per character (plus a few bytes of overhead). Others (like `SeqTwoByteString`) allocate 2 bytes per character (plus a few bytes of overhead). In practice, a single HTML string of the OOTB Spartacus Homepage _in prod mode_ now takes ~520kB of "Allocated Size" in V8, so the overhead of a _few bytes_ per string can be ignored in calculations. Instead, we can just assume some general additional safety margin, e.g. an additional free 50 MB "just in case".

For Spartacus SSR, the HTML strings in cache entries were consistently represented with the two-byte-per-character structure (`SeqTwoByteString`) - this was verified through heap snapshots and memory measurements. See the example screenshot of NodeJS DevTools inspecting the Heap Snapshot for an HTML string using `SeqTwoByteString`:

<img width="1789" alt="image" src="https://github.com/user-attachments/assets/3c488e22-9cc8-4152-ad82-ddc9b6894186" />

The "2 bytes per character" hypothesis was also confirmed by comparing the "Allocated Size" reported by NodeJS for a specific string (512kB) against the expression `2 * string.length`, where `string.length` is the number of characters in the string. Various experiments gave evidence that the expression `2 * string.length` accurately described the "Allocated Size" reported by NodeJS DevTools. The experiments included our HTML of the OOTB Homepage (512kB per HTML string), custom-made longer HTMLs of the OOTB Homepage (1.2MB per HTML string), and big strings consisting only of complex UTF-8 characters like `ąŌ∑ę∂™ŘŽĘŽ` repeated 208 times (a total length of 2080 chars, which gives 4160 expected bytes of "Allocated Size" assuming 2 bytes per character - matching the measured 4.2kB in the screenshot below):

<img width="1789" alt="image" src="https://github.com/user-attachments/assets/84843c1f-ea43-498f-af47-dcab2c9d61e9" />

(note: NodeJS dev tools use the SI standard of memory units based on powers of 10, not powers of 2, so `800 MB` equals `800 * 1_000_000` bytes)
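
Reproducing the arithmetic of that last experiment:

```ts
// 10 characters repeated 208 times = 2080 characters.
const sample = 'ąŌ∑ę∂™ŘŽĘŽ'.repeat(208);
const expectedBytes = 2 * sample.length;   // 2 bytes per character (SeqTwoByteString)
console.log(sample.length, expectedBytes); // 2080, 4160 (~4.2kB in DevTools, SI units)
```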

I made the above measurements using the `--inspect-brk` argument, connecting to the process via the remote NodeJS debugger in Chrome and taking Heap Snapshots. I'm not aware of any method of inspecting how much heap the V8 engine allocates in practice for each string **from within the running NodeJS process**. In other words, I see no option for Spartacus SSR to accurately inspect at runtime how much memory each entry takes. So it's only through prior experimentation and investigation of Heap Snapshots that we can understand how much memory is allocated for each HTML string in cache entries. This knowledge was then implemented in the logic of the Spartacus `DefaultCacheEntrySizeCalculator` (`2 * str.length`) to estimate and control the size of each cached string at runtime.

**1GB of "working memory" is needed for 10 parallel renderings of OOTB Homepage**
--------------------------------------------------------------------------------------
To be precise, I've noticed that on my machine ~500MB of "working memory" is needed for rendering 10 parallel requests for the OOTB Homepage as of today (note: in the experiment each parallel request for the Homepage had a distinct query param, to trigger separate renderings, even though all requests were for the Homepage). I logged `process.memoryUsage()` to a CSV file every 10ms, and from that data I observed a peak memory usage of ~500MB (by total memory I mean Resident Set Size, not just Heap Size, which is a subset of Resident Set Size).
For that experiment, I ensured no CSR fallbacks happened and disabled the cache (so the total memory doesn't grow due to cache size increase). See the "rss" (Resident Set Size) total process memory in the chart below (based on the data in the CSV):

![image](https://github.com/user-attachments/assets/1c10c583-de67-4ca0-92c8-94a65528e171)
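
The measurement loop was conceptually as simple as this sketch (the file name and columns are hypothetical):

```ts
import { appendFileSync } from 'node:fs';

// Log total process memory (rss) and heap usage every 10ms for later charting.
setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  appendFileSync('memory-usage.csv', `${Date.now()},${rss},${heapUsed}\n`);
}, 10);
```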

Even though the observed total memory usage was ~500MB, to be on the safe side let's add a margin of yet another 500MB to it - because each customer's custom page can have a different size than our OOTB Homepage, the data fetched from the backend and stored temporarily in memory can be different (bigger), and the Garbage Collector might kick in after a delay (so unused memory is not deallocated immediately, causing a temporarily bigger Heap Size and Resident Set Size until GC kicks in). So let's conservatively assume 1GB is needed to withstand 10 parallel renderings.

This is just a general estimation; each customer might need to do their own estimation for their specific needs.
The amount of "working memory" needed for rendering each custom page might be different. Moreover, customers can adjust the `concurrency` limit option of `OptimizedSsrEngine` (currently allowing at most 10 parallel renderings by default).

Moreover, the virtual machine can possibly be scaled up to allow more total available memory for the NodeJS process, and then the `--max-memory-restart` limit of the `pm2` process manager might be increased.

**Why we're setting the default `cacheSizeMemory` to 800MB**
--------------------------------------------------------
Assuming the `--max-memory-restart` config of the `pm2` process manager is set to `1.8GB` (`3GB * 0.6 max_memory_restart_factor = 1.8GB`), and assuming we should leave 1GB of "working memory" (as explained above) for ongoing renderings of incoming requests, then at most 800MB can be utilized for the cache. That said, each customer may want to tweak this value to their specific needs.

Note: As of today, the default NodeJS pod in CCV2 has 3GB of available system memory and the default `max_memory_restart_factor` configuration in CCV2 is `x0.6`, which translates to `1.8GB` for the actual value of `--max-memory-restart`. That said, in the future those default limits might be subject to change.

To give you a general feeling of how many average cache entries can fit into the 800MB limit, let's assume the average to be ~520kB of "Allocated Size" in V8 (the "Allocated Size" of a single OOTB Homepage HTML string in prod mode, as of today) and let's assume 800MB of total `cacheSizeMemory` (note: I'm using the SI standard of memory units based on powers of 10, not powers of 2). That gives ~1538 entries = 800 000 000 bytes / 520 000 bytes per entry. That said, each page can be different, so this is just to give you a general feeling.
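
The back-of-the-envelope math from the sections above, spelled out (SI units; all inputs are the assumptions stated in this description):

```ts
const podMemory = 3_000_000_000;          // 3 GB pod memory (CCV2 default, as of today)
const maxMemoryRestart = podMemory * 0.6; // 1.8 GB pm2 restart limit
const workingMemory = 1_000_000_000;      // ~1 GB for 10 parallel renderings
const cacheSizeMemory = maxMemoryRestart - workingMemory; // 800_000_000 B = 800 MB

const avgEntryBytes = 520_000;            // ~520 kB per OOTB Homepage HTML string
const avgEntries = Math.floor(cacheSizeMemory / avgEntryBytes); // ≈ 1538 entries
```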

**Why bother setting any default `cacheSizeMemory`, as enabling cache is not recommended anyway?**
-----------------------------------------------
The Spartacus `OptimizedSsrEngine` needs some space for the implicit in-memory cache of HTML strings that finished rendering after their original requests timed out. Such a cached HTML entry is reused for the next incoming request for that URL, so as not to waste the rendering result; then the cache entry is evicted. This takes place even when `cache: false` is configured (which is the recommended value). And if a customer enables `cache: true`, then we need a cache size limit anyway.

fixes https://jira.tools.sap/browse/CXSPA-8964

Co-authored-by: kpawelczak <[email protected]>
Co-authored-by: Krzysztof <[email protected]>
@Pucek9 Pucek9 requested a review from a team as a code owner May 16, 2025 22:50
@github-actions github-actions bot marked this pull request as draft May 16, 2025 22:50
FollowTheFlo previously approved these changes May 17, 2025
Contributor

@FollowTheFlo FollowTheFlo left a comment

lgtm

@Pucek9 Pucek9 marked this pull request as ready for review May 17, 2025 00:31

cypress bot commented May 17, 2025

spartacus • Run #48297

Run Properties:
- Run status: Passed #48297
- Project: spartacus
- Branch Review: fix/CXSPA-10184
- Run duration: 05m 26s
- Commit: 91e67c1324 (Merge 4abc0d10370b855207d0aec5cf32718ee558f37d into 130c72b93029c0e68081c31128fb...)
- Committer: Michał Gruca
- View all properties for this run ↗︎

Test results:
- Failures: 0
- Flaky: 3
- Pending (annotated with `.skip`): 2
- Skipped (due to a failure in a mocha hook): 0
- Passing: 236
- View all changes introduced in this branch ↗︎

@Pucek9 Pucek9 changed the base branch from develop to release/2211.41.x May 19, 2025 12:39
@Pucek9 Pucek9 dismissed FollowTheFlo’s stale review May 19, 2025 12:39

The base branch was changed.

@github-actions github-actions bot marked this pull request as draft May 19, 2025 13:45
@Pucek9 Pucek9 closed this May 19, 2025
@Pucek9 Pucek9 deleted the fix/CXSPA-10184 branch May 19, 2025 13:50

Pucek9 commented May 19, 2025

Replaced by #20337

Comment on lines +22 to +24
$padding: max(1.5rem, calc((100vw - var(--cx-page-width-max)) / 2));
padding-inline-start: $padding;
padding-inline-end: $padding;

this would be the same:

padding-inline: max(1.5rem, calc((100vw - var(--cx-page-width-max)) / 2));
