Pinned
-
microsoft/LLMLingua
[EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and the KV-cache, achieving up to 20x compression with minimal performance loss (see the usage sketch below the list).
-
microsoft/PhysioPro
A deep learning framework for physiological data processing and understanding.
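For reference, a minimal sketch of how LLMLingua's prompt compression is typically invoked, following the quick-start pattern from the repo's README; the token budget and example strings here are illustrative assumptions, so check the current documentation for exact parameters and supported compressor models.

```python
# pip install llmlingua
from llmlingua import PromptCompressor

# The default constructor downloads a small language model used to score
# and prune low-information tokens; a specific compressor can be selected
# via the model_name argument (see the repo README for options).
compressor = PromptCompressor()

long_context = "..."  # e.g. retrieved documents concatenated into one long string

result = compressor.compress_prompt(
    long_context,
    instruction="Answer the question using the context.",  # illustrative
    question="What does LLMLingua do?",                    # illustrative
    target_token=200,  # rough token budget for the compressed prompt
)

# The result reports the compressed prompt alongside before/after token counts.
print(result["compressed_prompt"])
print(result["origin_tokens"], "->", result["compressed_tokens"])
```

The compressed prompt can then be sent to any downstream LLM in place of the original, trading a small accuracy cost for much lower token usage.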