Is your feature request related to a problem? Please describe.
The current implementation of db.insert_multiple(points) takes approximately 2 minutes when handling a large number of points. This significantly limits throughput and scalability, especially in scenarios that require frequent batch insertions.
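For context, a minimal sketch of how the slow call is invoked and timed (the timing code is illustrative; only db.insert_multiple(points) reflects the actual code path, and db/points are assumed to already exist as in my setup):

```python
import time

# Illustrative timing harness around the existing call; db and points
# are assumed to already be set up, the measurement code is not part
# of the real code path.
start = time.perf_counter()
db.insert_multiple(points)  # currently takes roughly 2 minutes for a large batch
elapsed = time.perf_counter() - start
print(f"insert_multiple took {elapsed:.1f}s")
```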
Describe the solution you'd like
I would like to optimize the insert_multiple method to reduce its execution time. Possible solutions could include:
Implementing multithreading or multiprocessing to parallelize the insertion process.
Adding batch-processing logic to divide the points into smaller, more manageable chunks that can be processed concurrently (a rough sketch combining this with the multithreading idea follows this list).
Leveraging database-specific bulk insertion features to improve performance.
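A rough sketch of what the chunked, parallel approach could look like. The helper name, chunk_size, max_workers, and the use of ThreadPoolExecutor are illustrative assumptions, not part of the existing API, and whether db.insert_multiple is safe to call concurrently would need to be verified:

```python
from concurrent.futures import ThreadPoolExecutor

def insert_multiple_chunked(db, points, chunk_size=1000, max_workers=4):
    """Sketch only: split points into fixed-size chunks and insert them
    from a thread pool. chunk_size and max_workers are arbitrary
    illustrative values; concurrent use of db.insert_multiple is assumed
    to be safe and would need verification."""
    # Split the points into fixed-size chunks.
    chunks = [points[i:i + chunk_size] for i in range(0, len(points), chunk_size)]

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Each worker inserts one chunk; consuming the iterator waits for
        # completion and re-raises any exception from a worker.
        list(executor.map(db.insert_multiple, chunks))
```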
Describe alternatives you've considered
Using a single-threaded approach with smaller batch sizes; however, this still does not fully utilize the available resources, and the performance gains are marginal.
Splitting the workload manually outside the function, but this introduces additional complexity and redundant code (a sketch of this workaround follows the list).
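For reference, the manual splitting workaround looks roughly like this (batch_size is an arbitrary illustrative value; db and points are as above). This is the kind of duplication I would like insert_multiple to handle internally:

```python
# Sketch of the manual workaround: splitting the batch outside the function.
# batch_size is an arbitrary illustrative value.
batch_size = 500
for i in range(0, len(points), batch_size):
    db.insert_multiple(points[i:i + batch_size])
```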
Additional context