
whose UTF8 encoding is longer than the max length 32766 #6

Open
MarceloBaeza opened this issue Jun 19, 2018 · 2 comments

Comments

@MarceloBaeza

Hello, and thank you for your contribution to the community.
I have started working with ESA, but when it runs I get this error: "Document contains at least one immense term in field="text" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[6c 61 6c 61 6c 64 6b 6a 66 76 6e 74 75 69 76 62 79 6e 65 72 75 72 72 72 72 72 72 72 72 72]...'"
I tried changing the .bz2 file, to no avail (just to see whether the file itself was the problem). I hope you can help me.

@pvoosten
Owner

pvoosten commented Jun 26, 2018

Hi @MarceloBaeza, sorry for my late response and thank you for your interest in ESA.

Answers to the following questions could enable me to reproduce the problem, or help you or someone else solve the issue without my assistance:

  • At exactly which stage did you encounter problems? Did you follow the readme, or did you find an alternative way to work with ESA (no problem with that, btw)? EDIT: Did you change the analyzer? (See the diagnostic sketch after this list.)
  • Which exact Wikipedia dump did you use? Can you give the URL?
  • Is there only one document that causes trouble, or is the complete Lucene index unusable?
  • Exactly which document causes the problems? Can you find its title? Can you retrieve it from Wikipedia?
  • Are non-breaking spaces used in the document instead of regular white space? Is there anything else weird about the document?
  • What does the document look like in the dump file? Could the dump file format have changed since I first wrote ESA?
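
Regarding the analyzer question, here is a diagnostic sketch (not part of ESA; `StandardAnalyzer` below is only a stand-in for whatever analyzer you actually configured) that runs a suspect document through the analysis chain and flags any term over Lucene's byte limit:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class TermLengthCheck {
    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer(); // replace with the analyzer ESA uses
        String text = "paste the text of the suspect document here";
        try (TokenStream ts = analyzer.tokenStream("text", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                // Lucene's limit applies to the UTF-8 byte length, not the char count
                int bytes = term.toString().getBytes(StandardCharsets.UTF_8).length;
                if (bytes > 32766) {
                    System.out.println("Immense term (" + bytes + " bytes), prefix: "
                            + term.toString().substring(0, Math.min(30, term.length())));
                }
            }
            ts.end();
        }
    }
}
```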

@pvoosten
Owner

pvoosten commented Jun 26, 2018

Hi @MarceloBaeza,

The prefix you mention would be lalaldkjfvntuivbynerurrrrrrrrr... in UTF-8 encoding, which doesn't look like meaningful text to me...
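
(For the record, a throwaway sketch to verify that decoding of the hex prefix, using only standard Java:)

```java
import java.nio.charset.StandardCharsets;

public class DecodePrefix {
    public static void main(String[] args) {
        // the hex byte prefix from the exception message
        String[] hex = ("6c 61 6c 61 6c 64 6b 6a 66 76 6e 74 75 69 76 62 "
                + "79 6e 65 72 75 72 72 72 72 72 72 72 72 72").split(" ");
        byte[] bytes = new byte[hex.length];
        for (int i = 0; i < hex.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex[i], 16);
        }
        // prints: lalaldkjfvntuivbynerurrrrrrrrr
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}
```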

EDIT
The exception is thrown by Lucene because a single term exceeded the hard-coded term length limit of 32766 bytes. More details here on StackOverflow. The answers there suggest this happens when the analyzer produces such very long terms; you should find out whether that is really the case for your setup.
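
If your analysis chain really is producing such terms, a common workaround is to cap the token length so oversized tokens are dropped instead of aborting indexing. A minimal sketch, assuming a reasonably recent Lucene (the packages of LowerCaseFilter and LengthFilter have moved between versions):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.miscellaneous.LengthFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class CappedAnalyzer extends Analyzer {
    // The 32766 limit is in UTF-8 bytes; LengthFilter counts chars,
    // so 8191 chars stays safely under the limit even at 4 bytes per char.
    private static final int MAX_TOKEN_CHARS = 8191;

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        TokenStream result = new LowerCaseFilter(source);
        result = new LengthFilter(result, 1, MAX_TOKEN_CHARS);
        return new TokenStreamComponents(source, result);
    }
}
```

Note that this silently drops the offending tokens; whether that is acceptable depends on what they are, which is why it's worth inspecting them first.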
