dbt's data_type can include a much broader set of types than those I currently see in the package's source. The type can be any type supported by the underlying database. For example, PostgreSQL's varchar type:
Error: invalid operation: Error while calling method: Python error: RuntimeError: Unknown column type of volumes.id: varchar(255)
Package version: 0.6.0
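As a workaround sketch, one could normalize raw warehouse types down to Cube's small set of dimension types before passing columns to cube_dbt. The `normalize_type` helper and its mapping below are assumptions based on common SQL type names, not part of the cube_dbt API:

```python
import re

# Cube's supported dimension types (per Cube's data modeling docs)
CUBE_TYPES = {"string", "number", "time", "boolean", "geo"}

def normalize_type(data_type: str) -> str:
    """Map a raw database column type (hypothetical helper) to a Cube dimension type."""
    # Strip length/precision qualifiers such as "(255)" or "(10, 2)"
    base = re.sub(r"\(.*\)", "", data_type).strip().lower()
    if base in CUBE_TYPES:
        return base
    if base in {"varchar", "char", "text", "uuid"}:
        return "string"
    if base in {"int", "integer", "bigint", "smallint",
                "decimal", "numeric", "float", "double", "real"}:
        return "number"
    if base in {"date", "datetime", "timestamp", "timestamptz"}:
        return "time"
    if base == "bool":
        return "boolean"
    return "string"  # conservative fallback for unknown types

print(normalize_type("varchar(255)"))   # string
print(normalize_type("decimal(10, 2)")) # number
```

This would sidestep the `Unknown column type` error above by folding `varchar(255)` into `string`, at the cost of a lossy mapping.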
We are currently experimenting with Cube Cloud after a month or so testing things out with Cube Core. We plan on using dbt to generate our models.
We recently updated our process and now include each column's data_type in the generated YAML model files. We use dbt-codegen's generate_model_yaml macro to generate the models.
We use Databricks SQL Serverless as our data warehouse.
Issue encountered
The resulting manifest.json mentions data types that are not supported by cube-dbt:
I'm not sure that this qualifies as an issue of the cube_dbt codebase. Cube clarifies what data types are allowed, and the list is abysmally small. So, all that cube_dbt is doing is supporting that list.