Unify Efficient Fine-Tuning of 100+ LLMs
✨✨Latest Papers and Datasets on Multimodal Large Language Models, and Their Evaluation.
The official GitHub page for the survey paper "A Survey of Large Language Models".
[KDD2024] "UrbanGPT: Spatio-Temporal Large Language Models"
[KDD2024] "HiGPT: Heterogeneous Graph Language Models"
A curated list of channels and sources for learning about LLMs.
A one-stop data processing system to make data higher-quality, juicier, and more digestible for LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
A summary of Prompt & LLM papers, open-source datasets & models, and AIGC applications.
Preprint: LESS: Selecting Influential Data for Targeted Instruction Tuning
Video Foundation Models & Data for Multimodal Understanding
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
[ICML2024] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (see the sketch after this list)
🐳 Aurora is a Chinese MoE model built on Mixtral-8x7B; it activates the model's chat capability in the Chinese open domain.
InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) excelling in free-form text-image composition and comprehension.
Crawls Discourse chat data and parses it on the fly into a format ready for LLM instruction fine-tuning.
Generative Representational Instruction Tuning
A collection of completed LLM projects and a good place to start learning about LLMs.
A dataset collection and preprocessing framework for extreme multitask learning in NLP.
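For the DoRA entry above, here is a minimal sketch of the core idea in PyTorch. It is not the official implementation, and the class and parameter names (DoRALinear, rank, m, A, B) are illustrative assumptions: DoRA reparameterizes a frozen pretrained weight as a column-wise magnitude times a unit direction, and applies the LoRA-style low-rank update to the direction only.

```python
# Minimal sketch of the DoRA idea (not the official implementation).
# A frozen pretrained weight W0 is decomposed into a trainable magnitude
# vector m and a direction; a LoRA-style low-rank update (B @ A) is applied
# to the direction only. All names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        # Frozen pretrained weight W0 (out_features x in_features) and bias
        self.register_buffer("W0", base.weight.detach().clone())
        self.bias = base.bias
        out_f, in_f = self.W0.shape
        # Trainable magnitude m, initialized to the column-wise norms of W0
        self.m = nn.Parameter(self.W0.norm(p=2, dim=0, keepdim=True))  # (1, in_f)
        # Trainable LoRA factors; B starts at zero so training begins at W0
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # (rank, in_f)
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # (out_f, rank)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Direction: column-normalize the low-rank-updated weight, rescale by m
        V = self.W0 + self.B @ self.A
        W = self.m * (V / V.norm(p=2, dim=0, keepdim=True))
        return F.linear(x, W, self.bias)

# Usage: wrap an existing linear layer; W0 stays frozen while m, A, B
# (and the base bias) remain trainable.
layer = DoRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 768))
```

Initializing B to zero means the wrapped layer reproduces the pretrained weights exactly at the start of fine-tuning, so training departs smoothly from the base model.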