SqlAlchemy handling of firebird stream blobs (a.k.a. BlobReader objects) #58
It sounds like with fdb you can still read the BlobReader objects after the cursor is closed, but with firebird_driver the BlobReader objects are closed along with the cursor and can no longer be read. Is that right?
A possible workaround is to make sure everything you need has been read (e.g., with .all() or .first()) before the cursor is closed and the result set is processed.
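The suggested pattern, sketched with the stdlib sqlite3 module since the "fetch everything before closing the cursor" idea is driver-agnostic (the table and column names here are made up for illustration):

```python
# Sketch of the suggested pattern: fetch every row while the cursor is
# still open, so closing it afterwards cannot invalidate the data.
# Uses sqlite3 only so the sketch runs without a Firebird server.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE docs (id INTEGER, payload BLOB)")
cur.execute("INSERT INTO docs VALUES (1, ?)", (b"\xde\xad\xbe\xef",))

cur.execute("SELECT id, payload FROM docs")
rows = cur.fetchall()  # materialise everything up front
cur.close()            # safe: rows hold plain bytes, not blob readers

assert rows[0][1] == b"\xde\xad\xbe\xef"
```

With a driver that returns stream blobs, the equivalent step would also need to read each blob object while the cursor is open, which is exactly what SQLAlchemy's internals do not do.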
I’ll add reviewing the cursor closing behavior to the list of things to do.
Thanks,
Paul
From: Bryan Cole
Sent: Tuesday, October 10, 2023 9:02 AM
Subject: [pauldex/sqlalchemy-firebird] SqlAlchemy handling of firebird stream blobs (a.k.a. BlobReader objects) (Issue #58)
I'm having a problem reading large blobs from a firebird database, using the new firebird-driver.
For large blobs, the firebird driver (both new and old) returns BlobReader objects rather than fully materialised Python bytes objects. These BlobReader objects are file-like and can be read to obtain the binary data. What has changed in the new firebird driver is that the Cursor.close() method now closes all BlobReader objects associated with that cursor. Unfortunately, when SQLAlchemy executes a statement returning data (i.e. calls fetchXXX() on the cursor), it always closes the cursor after iterating over it, but before accessing any of the data in the returned rows. Hence, later on, when the data is passed to the Dialect TypeDecorator for type conversion, the cursor (and with it every BlobReader) has already been closed, so reading the data fails.
Although I can't see a way to fix this in the Dialect, I'm wondering if you have any ideas, before I look at modifying the sqlalchemy core to add some sort of hook to customise Cursor closing behaviour.
The author of the new firebird driver seems adamant that the BlobReaders ought to be closed when the Cursor is closed.
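A toy simulation of the lifecycle described above, using plain Python stand-ins for the cursor and blob reader (these classes are illustrative, not the real firebird-driver API):

```python
# Toy model of the problem: in firebird-driver, closing the cursor
# closes every BlobReader it handed out, so rows fetched earlier hold
# dead readers by the time type conversion runs.
import io


class FakeBlobReader(io.BytesIO):
    """File-like blob object, closed together with its owning cursor."""


class FakeCursor:
    def __init__(self, blob_bytes: bytes):
        self._readers = [FakeBlobReader(blob_bytes)]

    def fetchall(self):
        # Rows contain reader objects, not materialised bytes.
        return [(reader,) for reader in self._readers]

    def close(self):
        # Mirrors the new driver: close all associated blob readers.
        for reader in self._readers:
            reader.close()


cur = FakeCursor(b"large blob payload")
rows = cur.fetchall()
cur.close()  # SQLAlchemy closes the cursor before type conversion runs

try:
    rows[0][0].read()  # too late: the reader went down with the cursor
except ValueError as exc:
    print("reading failed:", exc)
```

Under fdb the equivalent of FakeCursor.close() would leave the readers open, which is why the same SQLAlchemy code path works with the old driver.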
Thanks for your response. Yes, that's right (the fdb driver doesn't close BlobReaders on Cursor.close(); firebird-driver does). Sadly, calling .all() or .first() doesn't help. These calls retrieve all data for the query by iterating over the cursor and collecting the data into a list; this happens in the […]. This is giving me an idea: maybe I can subclass the […]. I have worked around the problem in the short term by setting the stream-blob threshold very high.
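The short-term workaround might look like the following sketch. I believe firebird-driver exposes a stream_blob_threshold setting on its driver_config object (blobs below the threshold come back as materialised bytes rather than BlobReader objects), but treat the exact attribute name as an assumption and check the driver's configuration documentation before relying on it:

```python
# Hedged sketch: raise the driver-wide stream-blob threshold so that
# realistic blobs are always returned as materialised bytes.  The
# `stream_blob_threshold` option name is assumed from the firebird-driver
# configuration docs; verify it against your installed version.
try:
    from firebird.driver import driver_config

    # Effectively "always materialise" for any realistic blob size.
    driver_config.stream_blob_threshold.value = 2**31 - 1
except ImportError:
    pass  # firebird-driver not installed; nothing to configure here.
```

This trades memory for correctness: every blob is loaded fully into memory, so it is only viable when the largest blobs comfortably fit in RAM.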
@bryancole did you make any progress on this issue? I cannot work on this now, but I think it is appropriate to cite this comment from the SQLAlchemy maintainer.
Related discussion from […]
Sorry, I did not. I worked around it by forcing the stream-blob threshold to be very high, so all blobs are materialised in my application.
Bryan
…On Tue, Dec 19, 2023 at 5:15 PM, F.D.Castel wrote the message above.
@bryancole, should this issue remain open?