Problem with Firebird stream blobs #31
Note that iteration over BlobReader uses readline, so it works only with text blobs. I considered using read() for binary BLOBs, but rejected the idea, as there is no easy way to determine the amount of data to be returned. Of course, I could define a BlobReader attribute (for example […]). Also note that a BLOB value does not need to be returned as a BlobReader. Normally they are returned as materialized values (either […]).
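The readline-based iteration described above can be illustrated with a plain file-like object. This is only a sketch using `io.BytesIO` as a stand-in for a BlobReader, not the driver's actual class:

```python
import io

# Text-like blob: readline splits on newlines, so line iteration works.
text_blob = io.BytesIO(b"first line\nsecond line\n")
assert list(text_blob) == [b"first line\n", b"second line\n"]

# Binary blob with no 0x0A bytes: readline returns the whole payload at
# once, so "line" iteration is meaningless for binary data -- read() with
# an explicit size would be needed instead.
binary_blob = io.BytesIO(b"\x00\x01\x02\xff" * 4)
chunks = list(binary_blob)
assert len(chunks) == 1  # everything came back as a single "line"
```

This is why iterating a BlobReader only makes sense for text BLOBs, while binary BLOBs need explicit read() calls with a chosen chunk size.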
Thanks for the feedback. You are right, of course: SQLAlchemy closes the DB-API cursor immediately after iterating over it, and before accessing the data in the row object. Sadly, there does not seem to be any way to avoid this behaviour in SQLAlchemy core except by modifying the SQLAlchemy code directly. I will take this question to the SQLAlchemy issue list. Ultimately, my problem of reading large-ish BLOBs with SQLAlchemy is solved if I set the […]. I am surprised to discover that it is not possible to pass a Python file object to the […].
I'm trying to use the new v1.10 driver with SQLAlchemy v1.4 to load a BLOB type from my database. The problem is that SQLAlchemy iterates over the connection cursor to obtain all rows, and then passes those rows to a "processor" object to do type conversions. The DB cursor returns a BlobReader object for each BLOB column. BlobReader objects are file-like objects that are read later, in the processor object, after the cursor iteration has finished. This used to work fine with the old FDB driver. The problem with the new driver is that the cursor object closes all BlobReader objects after the initial iteration over the cursor, so any later attempt to read the blob data fails.
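The failure mode can be sketched with simplified mock classes. These are hypothetical stand-ins illustrating the interaction, not the real firebird-driver or SQLAlchemy code:

```python
class FakeBlobReader:
    """Stand-in for firebird.driver.BlobReader (hypothetical mock)."""

    def __init__(self, data: bytes):
        self._data = data
        self.closed = False

    def read(self) -> bytes:
        if self.closed:
            raise ValueError("I/O operation on closed BlobReader")
        return self._data

    def close(self) -> None:
        self.closed = True


class FakeCursor:
    """Mimics a cursor that closes all BlobReaders once iteration ends."""

    def __init__(self, rows):
        self._rows = rows

    def __iter__(self):
        yield from self._rows
        self._clear()  # the driver closes every BlobReader here

    def _clear(self):
        for row in self._rows:
            for value in row:
                if isinstance(value, FakeBlobReader):
                    value.close()


cursor = FakeCursor([(FakeBlobReader(b"blob payload"),)])
rows = list(cursor)   # SQLAlchemy-style: materialize all rows first...
reader = rows[0][0]
try:
    reader.read()     # ...then process them: fails, reader already closed
except ValueError as exc:
    print(exc)
```

The deferred read fails because `_clear()` ran as a side effect of exhausting the iterator, even though the transaction is still open.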
I can get things to work if I remove the lines in the Cursor._clear() method (see python3-driver/src/firebird/driver/core.py, lines 3642 to 3643 in 1aaa016).
I can't figure out what the "correct" DB-API-compliant behaviour should be, as the DB-API doesn't describe any special handling for BLOBs or BlobReader objects. I don't really see why it should be necessary to close the BlobReaders after iteration over the cursor; it's all still within the context of the transaction.
A further related enhancement would be to add a `__bytes__(self)` method to the BlobReader object. Since the basic DB-API assumes the database will return simple types, a typical way to handle a BLOB would be to pass the value to the Python `bytes` constructor. This will try to iterate over the BlobReader using repeated calls to `BlobReader.readline()`. This also fails. Even if it succeeded, this is an inefficient way to convert a BLOB to bytes (unless it really is text split into lines). It is better to just call `BlobReader.read()` and get the entire thing as a bytes object. This is easily implemented in a `__bytes__()` method.

I could submit patches for these, but I'm holding off to see what you think.