This repository has been archived by the owner on Oct 1, 2024. It is now read-only.
Description
I tried to optimise the twister itself, or rather, to cache its seeded states. This yields good speed-ups.
WIP PR
In this implementation we hit the cache often within a single test, and over a test suite with lots of repeated fills of the same data I'd expect good speed-ups too. The cache of pre-seeded states is hit quite often, I think because of the seed collisions identified in this issue.
This needs the latest version of faker, though, since that lets us instantiate the faker instance with a custom RNG, which is what I did here. The RNG is a copy of the twister from faker itself, modified to cache the twister's state after each seeding.
This update is breaking, though: because of the faker bump, the random values are no longer guaranteed to be the same. In the admin tests we are (unfortunately, and mistakenly) relying on the random data generated by the filler.
I believe relying on the random data returned by the filler is bad practice: it blocks any changes we make here to how the random data is generated, and if we assert on that data in the tests, it's not clear where it is coming from.
So I actually question the value of the filler's randomness in general. We can fill the document with valid data without generating it randomly; the randomness adds a lot of overhead and, IMO, not much value.
More discussion about whether we should use faker here: #2161