Description
I'm having a problem reading large blobs from a Firebird database using the new firebird-driver.
For large blobs, the firebird driver (both new and old) returns BlobReader objects rather than fully materialised Python bytes objects. These BlobReader objects are file-like and can be read to obtain the binary data. What has changed in the new driver is that the Cursor.close() method now closes all BlobReader objects associated with that cursor. Unfortunately, when SQLAlchemy executes a statement returning data (i.e. calls fetchXXX() on the cursor), it always closes the cursor after iterating over it, but before accessing any of the data in the returned rows. Hence, by the time the data is passed to the Dialect's TypeDecorator for type conversion, the cursor, and with it every BlobReader, has already been closed, and reading the data fails.
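To illustrate, here is a minimal sketch of the failure mode using firebird-driver directly. The connection parameters and the table/column names (BIG_BLOBS, DATA) are placeholders; the key assumption is that the blob is large enough for the driver to hand back a BlobReader rather than bytes:

```python
from firebird.driver import connect

con = connect('employee', user='SYSDBA', password='masterkey')
cur = con.cursor()
cur.execute('SELECT DATA FROM BIG_BLOBS')
rows = cur.fetchall()   # rows contain BlobReader objects, not bytes
cur.close()             # the new driver closes every associated BlobReader

blob = rows[0][0]
data = blob.read()      # fails: the reader was closed along with the cursor
```

This mirrors what happens inside SQLAlchemy, except there the cur.close() call is buried in the core result machinery rather than in user code.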
Although I can't see a way to fix this in the Dialect, I'm wondering whether you have any ideas before I look at modifying the SQLAlchemy core to add some sort of hook for customising cursor-closing behaviour.
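For reference, one possible shape for such a hook, sketched as a DBAPI cursor proxy that eagerly reads BlobReader values into bytes at fetch time, so that nothing in the returned rows depends on the cursor staying open. The BlobReader import path is my assumption, and this is not anything the dialect currently does:

```python
from firebird.driver.core import BlobReader  # assumed import path


def _materialise(row):
    # Replace any BlobReader in the row with its fully-read bytes.
    return tuple(
        value.read() if isinstance(value, BlobReader) else value
        for value in row
    )


class BlobMaterialisingCursor:
    """Delegates to the real cursor, but never leaks BlobReader objects."""

    def __init__(self, cursor):
        self._cursor = cursor

    def fetchone(self):
        row = self._cursor.fetchone()
        return None if row is None else _materialise(row)

    def fetchmany(self, size=None):
        rows = (self._cursor.fetchmany() if size is None
                else self._cursor.fetchmany(size))
        return [_materialise(row) for row in rows]

    def fetchall(self):
        return [_materialise(row) for row in self._cursor.fetchall()]

    def __getattr__(self, name):
        # Everything else (execute, close, description, ...) passes through.
        return getattr(self._cursor, name)
```

This obviously defeats the point of streaming large blobs, since everything is materialised up front, but it would at least restore the old behaviour until a proper hook exists.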
The author of the new firebird driver seems adamant that the BlobReaders ought to be closed when the Cursor is closed.