Question: How to solve N+1 problem with Rejoiner #68
xiaomeiwen changed the title from "Question: SchemaModification" to "Question: SchemaModification time issue" on Jan 25, 2019
xiaomeiwen changed the title to "Question: How to solve N+1 problem with Rejoiner" on Jan 25, 2019
I recently faced a performance issue with a multi-level query using Rejoiner.
Here's an example similar to my data structure, where each level is one-to-many: a company has many departments, a department has many teams, and a team has many members.
I defined a set of gRPC APIs to fetch the details at each level, and all the IDs are globally unique (e.g. team 123 belongs to exactly one department, which belongs to exactly one company).
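To make the setup concrete, the per-level lookups described above might look like the following in-memory sketch (plain-Java stand-ins for the gRPC stubs; all class and method names here are hypothetical, not Rejoiner or gRPC API):

```java
import java.util.List;
import java.util.Map;

// Hypothetical, in-memory stand-in for the per-level gRPC lookups.
// In the real system each method would be a unary RPC on a service stub.
class OrgDirectory {
    // Parent ID -> child IDs; IDs are globally unique across the whole tree,
    // so a child lookup only needs the parent's ID.
    private final Map<String, List<String>> children;

    OrgDirectory(Map<String, List<String>> children) {
        this.children = children;
    }

    List<String> departmentsOf(String companyId)  { return children.get(companyId); }
    List<String> teamsOf(String departmentId)     { return children.get(departmentId); }
    List<String> membersOf(String teamId)         { return children.get(teamId); }
}
```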
I then created a query, companyEmployees(companyId), which uses Rejoiner's SchemaModification to resolve the nested levels of the data structure.
I added logs to trace the companyDepartments query and how it triggers SchemaModification. In my DB I have 1 company with 2 departments, each department contains 2 teams, and each team contains 5 members. I observed the following behavior: it gets the company first -> calls SchemaModification to get the first department -> calls SchemaModification to get the first team of the first department -> ...
It looks like a depth-first traversal, so the query time grows linearly with how much data is in the DB.
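The linear growth can be made concrete with a small sketch that issues one simulated service call per node, resolved depth-first the way the logs suggest (all names are hypothetical; this is a model of the behavior, not Rejoiner's implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Models depth-first, one-call-per-node resolution: every node triggers
// its own lookup, so the number of round trips scales with the data size.
class NaiveResolver {
    final Map<String, List<String>> children; // parent ID -> child IDs
    final AtomicInteger calls = new AtomicInteger();

    NaiveResolver(Map<String, List<String>> children) {
        this.children = children;
    }

    // Each lookup simulates one gRPC round trip.
    List<String> fetchChildren(String id) {
        calls.incrementAndGet();
        return children.getOrDefault(id, List.of());
    }

    // Depth-first resolution: fully resolves each child before moving on,
    // matching the "first department -> first team -> ..." order in the logs.
    void resolve(String id) {
        for (String child : fetchChildren(id)) {
            resolve(child);
        }
    }
}
```

With 1 company, 2 departments, 2 teams per department, and 5 members per team, that is 27 sequential calls, one per node in the tree.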
Is there any approach that would bring the query time closer to constant, by letting those queries run in parallel or in batches?
Facebook uses DataLoader to solve this N+1 problem; I wonder how to solve it with Rejoiner.
https://engineering.shopify.com/blogs/engineering/solving-the-n-1-problem-for-graphql-through-batching
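The batching idea from the linked article can be sketched in a few lines of plain Java: loads are queued per key, then one dispatch resolves every queued key with a single batch call. This is an illustration of the technique only, not the API of DataLoader, java-dataloader, or Rejoiner:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Minimal DataLoader-style batcher (illustrative names throughout).
// Instead of one RPC per ID, callers queue IDs and a single batch
// call fetches all of them at once.
class TinyLoader<K, V> {
    private final Function<List<K>, Map<K, V>> batchFn; // one "RPC" for many keys
    private final Map<K, CompletableFuture<V>> pending = new HashMap<>();

    TinyLoader(Function<List<K>, Map<K, V>> batchFn) {
        this.batchFn = batchFn;
    }

    // Queue a key; the returned future completes when dispatch() runs.
    // Duplicate keys share one future, so they are fetched only once.
    CompletableFuture<V> load(K key) {
        return pending.computeIfAbsent(key, k -> new CompletableFuture<>());
    }

    // Resolve everything queued so far with a single batch call.
    void dispatch() {
        List<K> keys = new ArrayList<>(pending.keySet());
        Map<K, V> results = batchFn.apply(keys);
        keys.forEach(k -> pending.remove(k).complete(results.get(k)));
    }
}
```

Applied to the example above, the resolver for departments would call load(departmentId) for every department at a level and dispatch once, turning N lookups into one batched call per level of the tree.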