Upload files failing after approximately 50 files
Describe the bug
When uploading multiple files (approximately 50) to a table via the API or manually, the upload process starts failing with HTTPConnectionPool(host='127.0.0.1', port=3000): Read timed out. (read timeout=10) errors. This happens consistently after around 50 files, regardless of whether the uploads are performed via a script or manually. The server handles the initial uploads successfully but begins timing out on subsequent requests.
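For context, the failing uploads come from a loop along these lines. This is only a minimal sketch, not the exact script: the upload endpoint path, token, and record IDs are placeholders.

```python
import logging
import pathlib
import requests

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s - %(levelname)s - %(message)s")

BASE_URL = "http://127.0.0.1:3000"   # local instance, as in the error message
TOKEN = "YOUR_API_TOKEN"             # placeholder
TIMEOUT = 10                         # the read timeout that shows up in the error
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def upload_attachment(record_id: str, path: pathlib.Path) -> None:
    # Placeholder endpoint; the real path comes from the attachment-upload
    # route in the OpenAPI docs.
    url = f"{BASE_URL}/api/attachments/upload/{record_id}"
    with path.open("rb") as fh:
        resp = requests.post(url, headers=HEADERS,
                             files={"file": (path.name, fh)},
                             timeout=TIMEOUT)
    resp.raise_for_status()

# ~50 (record id, file) pairs prepared earlier in the script (placeholder ids here).
uploads = [(f"recPlaceholder{i}", p)
           for i, p in enumerate(sorted(pathlib.Path("./files").glob("*.pdf")))]

for record_id, path in uploads:
    try:
        upload_attachment(record_id, path)
    except requests.exceptions.ReadTimeout as exc:
        # After roughly 50 files these start firing:
        # "Failed to upload file for record ...: Read timed out. (read timeout=10)"
        logging.error("Failed to upload file for record %s: %s", record_id, exc)
```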
Config for the upload
Using S3 uploads.
The file is uploaded to the bucket successfully; I can find it when I browse the data.
Screenshots
2025-03-10 05:19:26,471 - ERROR - Failed to upload file for record recbhV4sVwe5zvLAdvO: HTTPConnectionPool(host='127.0.0.1', port=3000): Read timed out. (read timeout=10)
Logs of the container:
When I try to upload, it stays at this step.
It seems the request http://localhost:3000/api/attachments/notify/QbzzQEJQIYNr?filename=C_Diapo.pdf is stuck.
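To separate a too-tight client timeout from a genuine server hang, the same notify request can be replayed with a much longer read timeout. This is a diagnostic sketch only; the HTTP method and the absence of auth headers are assumptions, and the URL is the one from the log above.

```python
import time
import requests

# Notify URL observed in the container log; use one from a fresh upload attempt.
url = "http://localhost:3000/api/attachments/notify/QbzzQEJQIYNr"
params = {"filename": "C_Diapo.pdf"}

start = time.monotonic()
try:
    # 120 s instead of 10 s: if this still times out, the request is stuck on
    # the server, not merely hitting a too-tight client timeout.
    resp = requests.post(url, params=params, timeout=(5, 120))
    print(resp.status_code, f"answered after {time.monotonic() - start:.1f}s")
except requests.exceptions.ReadTimeout:
    print(f"still no response after {time.monotonic() - start:.1f}s")
```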
Temporary Solution
docker compose down
docker compose up -d
Additional context
The issue persists even with manual uploads, suggesting a server-side limitation rather than a client-side script problem.
The backend code (AttachmentsService) and thresholdConfig do not impose an explicit limit on the number of uploads, only file size limits (maxAttachmentUploadSize and maxOpenapiAttachmentUploadSize, both set to Infinity by default).
Logs indicate successful record creation, but the attachment upload then fails with a 10-second timeout (set in the script and possibly reflected in server behavior); see the sketch after the log snippet below.
Server logs or configuration details (e.g., Prisma connection pool, NestJS timeout settings) might provide further insight.
Example log snippet:
2025-03-10 05:19:26,471 - ERROR - Failed to upload file for record recbhV4sVwe5zvLAdvO: HTTPConnectionPool(host='127.0.0.1', port=3000): Read timed out. (read timeout=10)
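Because the 10-second limit is set client-side, one cheap check is to retry the upload with progressively longer read timeouts before concluding the server is wedged. This is only a sketch of that idea; it works around, but does not explain, the hang that the docker compose restart clears.

```python
import time
import requests

def post_with_retry(url, *, headers=None, files=None,
                    read_timeouts=(10, 30, 60), backoff=2.0):
    """POST with progressively longer read timeouts.

    Pass file contents as bytes (not open file handles) so each retry
    resends the full payload. If even the longest attempt times out, the
    server request is genuinely stuck, matching the notify hang above.
    """
    last_exc = None
    for attempt, read_timeout in enumerate(read_timeouts):
        try:
            return requests.post(url, headers=headers, files=files,
                                 timeout=(5, read_timeout))  # (connect, read)
        except requests.exceptions.ReadTimeout as exc:
            last_exc = exc
            time.sleep(backoff * (attempt + 1))
    raise last_exc
```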