Long rows #32

Open · Madnex opened this issue Jul 14, 2023 · 0 comments

Madnex commented Jul 14, 2023

Hi! I ran a few initial experiments with GReaT and I like it already :)

I was wondering whether you have thought about how to tackle the current token limits of LLMs. If I understand correctly, the model processes one row at a time during training and generates one row at a time during sampling. Hence, the token limit effectively caps how long a row can be in its text form.

So far I have come up with only the following ideas for fitting data with many features into that token limit (a rough sketch follows the example below):

  • "Compress" the feature names: Reducing the length of the column names to avoid token overhead by renaming / encoding the feature names to more token friendly strings.
  • The same for categorical values that are too long.

For example, if a column were originally named "Patient disease name" and held the value "Creutzfeldt–Jakob disease", it could be renamed to "Disease" with the value "CJ".
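
To make this concrete, here is a minimal sketch of what I mean (the mappings below are made up for illustration): compress the DataFrame before training, then invert the mapping on the sampled output to restore the original schema.

```python
import pandas as pd

# Made-up mappings: shorter, token-friendly aliases for long names and values
COLUMN_ALIASES = {"Patient disease name": "Disease"}
VALUE_ALIASES = {"Disease": {"Creutzfeldt–Jakob disease": "CJ"}}

def compress(df: pd.DataFrame) -> pd.DataFrame:
    """Rename long columns and categorical values before training."""
    out = df.rename(columns=COLUMN_ALIASES)
    for col, mapping in VALUE_ALIASES.items():
        out[col] = out[col].replace(mapping)
    return out

def decompress(df: pd.DataFrame) -> pd.DataFrame:
    """Invert the aliases on sampled data to restore the original schema."""
    out = df.copy()
    for col, mapping in VALUE_ALIASES.items():
        out[col] = out[col].replace({v: k for k, v in mapping.items()})
    return out.rename(columns={v: k for k, v in COLUMN_ALIASES.items()})
```

Training would then run on `compress(real_df)`, and `decompress` would be applied to whatever the sampler returns.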

Do you think this approach makes sense?

I am especially struggling to find a way to handle free-form text features; ironically, these are the ones seemingly ideally suited to this LLM approach. I have some columns containing free-form text, and unfortunately those regularly exceed the token limit. Do you have any recommendations for how to deal with this scenario?
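
For reference, this is roughly how I am checking which rows would overflow. I am assuming the default distilgpt2 backbone and approximating the "column is value" serialization described in the GReaT paper, so the exact counts may be slightly off:

```python
import pandas as pd
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")  # assuming the default backbone
MAX_TOKENS = tokenizer.model_max_length  # 1024 for the GPT-2 family

def row_token_count(row: pd.Series) -> int:
    # Approximate the "column is value" serialization from the GReaT paper
    text = ", ".join(f"{col} is {val}" for col, val in row.items())
    return len(tokenizer(text)["input_ids"])

df = pd.DataFrame({"Disease": ["CJ"], "Notes": ["some long free-form text ..."]})
over = df[df.apply(row_token_count, axis=1) > MAX_TOKENS]
print(f"{len(over)} of {len(df)} rows exceed the context window")
```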
