SQLAlchemy + Pandas very slow when compared to AWS Wrangler #254
Thank you. I’ve seen all these links and methods before, but they don’t clarify how to use this with SQLAlchemy, and I need to use SQLAlchemy. So, is there any way to use PyAthena with SQLAlchemy and Pandas that achieves acceptable performance? Maybe a flag in the URL or something.
I do not have an implementation that uses a combination of SQLAlchemy and Pandas. What is the use case for that?
I have a Pandas application that is independent of the database. SQLAlchemy provides a single unified interface, so the code doesn’t need to change when switching database backends, just the URL.
The current implementation does not allow using PandasCursor and SQLAlchemy in combination.
Are there any guides on how to use it? I’m using the SQLAlchemy query builder to query Athena through pandas read_sql_query(), or simply the use case I put in code in the notebook attached to this bug report. Also, I’d suggest being more explicit about this problem when using PyAthena with SQLAlchemy and Pandas. Simply suggesting to bypass SQLAlchemy is not a solution.
What do you need a guide for, a guide for PandasCursor?
I don't know how to use it in this situation:

```python
conn_str = "awsathena+rest://:@athena.{region_name}.amazonaws.com:443/{schema_name}?work_group={work_group}&compression=snappy"

pyathena_sqlalchemy_conn = sqlalchemy.create_engine(
    url=conn_str.format(
        region_name=region_name,
        schema_name=schema_name,
        work_group=work_group,
    ),
    echo_pool=True,
)
pyathena_df = pd.read_sql_query(query, con=pyathena_sqlalchemy_conn)
```

What should I pass to …? Thank you in advance.
In the current implementation, SQLAlchemy and PandasCursor cannot be used in combination. As long as you use SQLAlchemy, you cannot change the default cursor. If you do not use SQLAlchemy, you can use PandasCursor as follows.
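The code example for this comment appears to have been lost in extraction. A minimal sketch of what plain PandasCursor usage (without SQLAlchemy) looks like — the bucket, region, and table name below are placeholders, not values from this thread:

```python
from pyathena import connect
from pyathena.pandas.cursor import PandasCursor

# Placeholders: s3_staging_dir, region_name, and the query.
cursor = connect(
    s3_staging_dir="s3://YOUR-BUCKET/athena-results/",
    region_name="us-east-1",
    cursor_class=PandasCursor,
).cursor()
df = cursor.execute("SELECT * FROM some_table").as_pandas()
```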
If you want to use PandasCursor in combination with SQLAlchemy, you probably need to modify the part that assembles the connection object's arguments from the URL.
This new method apparently reduced query time from 16min to 6m35s, but it is still far from beating AWS Wrangler's 1m15s (without SQLAlchemy, indeed):

```python
conn_str = "awsathena+rest://:@athena.{region_name}.amazonaws.com:443/{schema_name}?work_group={work_group}&compression=snappy"

sqlalchemy_connect_args = {}
if "awsathena+rest" in conn_str:
    from pyathena.pandas.cursor import PandasCursor
    sqlalchemy_connect_args.update(dict(cursor_class=PandasCursor))

pyathena_sqlalchemy_conn = sqlalchemy.create_engine(
    url=conn_str.format(
        region_name=region_name,
        schema_name=schema_name,
        work_group=work_group,
    ),
    connect_args=sqlalchemy_connect_args,
    echo_pool=True,
)

cursor = pyathena_sqlalchemy_conn.raw_connection()
pyathena_df = pd.read_sql_query(query, con=cursor)
pyathena_df.info()
```

It consists of getting a PandasCursor from SQLAlchemy's raw connection. But I don't know yet whether I can reuse the cursor or should get a new cursor for each query.
Here is a simplification. You don't need to get a raw connection:

```python
conn_str = "awsathena+rest://:@athena.{region_name}.amazonaws.com:443/{schema_name}?work_group={work_group}&compression=snappy"

sqlalchemy_connect_args = {}
if "awsathena+rest" in conn_str:
    from pyathena.pandas.cursor import PandasCursor
    sqlalchemy_connect_args.update(dict(cursor_class=PandasCursor))

pyathena_sqlalchemy_conn = sqlalchemy.create_engine(
    url=conn_str.format(
        region_name=region_name,
        schema_name=schema_name,
        work_group=work_group,
    ),
    connect_args=sqlalchemy_connect_args,
    echo_pool=True,
)
pyathena_df = pd.read_sql_query(query, con=pyathena_sqlalchemy_conn)
```

Still, PyAthena alone or AWS Wrangler (both without SQLAlchemy) run the same query in 1m15s, for a 2-million-row result. The villain here is SQLAlchemy, or the integration between SQLAlchemy and PyAthena. I hope this can be solved in the future so that PyAthena + SQLAlchemy delivers the same performance as PyAthena alone. SQLAlchemy is desirable because it makes your code more elegant and succinct, and makes your app nicer and independent of the database engine.
For reference, here is the notebook used in this experiment:
The PandasCursor itself already has a DataFrame, but it fetches rows from that DataFrame to comply with the DB-API interface.
So indeed, all this abstraction from SQLAlchemy comes at a cost.
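The cost can be seen outside Athena too: with any DB-API driver, pandas `read_sql_query` calls `cursor.execute()`, fetches row tuples through the cursor, and then rebuilds a DataFrame from those tuples, rather than receiving a columnar result directly. A self-contained illustration of that DB-API path using sqlite3 (not Athena, just the same code path pandas takes):

```python
import sqlite3

import pandas as pd

# In-memory database with a small table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [(i, f"row{i}") for i in range(1000)],
)

# pandas goes through the DB-API: execute the query, fetch row tuples,
# then assemble a DataFrame from those tuples row by row.
df = pd.read_sql_query("SELECT a, b FROM t", conn)
print(len(df), list(df.columns))  # prints: 1000 ['a', 'b']
```

This row-tuple round trip is cheap for sqlite3 but expensive when the driver (like PandasCursor) already holds the data in columnar form.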
Add AthenaPandas dialect (refs: #254)
Use to_dict instead of Iterrows (ref #254)
I have implemented various performance improvements.
I think the Pandas cursor can get a large result set at the same speed as awswrangler (aws-sdk-pandas) by combining the unload option with our own S3 file system implementation (implemented in v2.18.0).
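Assuming PyAthena >= 2.18, the unload option can be enabled on the PandasCursor so results are written as Parquet via Athena's UNLOAD statement instead of being read back from the CSV result set. A sketch under that assumption (bucket, region, and table name are placeholders):

```python
from pyathena import connect
from pyathena.pandas.cursor import PandasCursor

# Placeholders: s3_staging_dir, region_name, and the query.
cursor = connect(
    s3_staging_dir="s3://YOUR-BUCKET/athena-results/",
    region_name="us-east-1",
    cursor_class=PandasCursor,
).cursor(unload=True)  # UNLOAD-based fetch, PyAthena >= 2.18
df = cursor.execute("SELECT * FROM some_table").as_pandas()
```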
Tested the same query, which returns more than 2 million rows.
AWS Wrangler takes 1m34s to return a DataFrame.
SQLAlchemy + PyAthena takes 16m37s to return a DataFrame.
See the attached notebook for proof and methods.
Also, PyAthena apparently returns an object almost 9% bigger, but this may be due to Pandas, SQLAlchemy, data types, and other minor things that I won’t worry about right now.