
Hi! Here is Coobiw 👋

πŸ™‹β€β™‚οΈ About Me:

  • πŸ‘¨β€πŸ¦° I’m currently a M.Phil candidate of Peking University.
  • πŸ‘¦ Before that, I received the (Honours) B.E., HUST.
  • ❀️‍πŸ”₯ Now, I am intersted in Multi-modal Learning especially MLLM.
  • πŸ’₯ In 2023 summer, I take part in OSPP(Open Source Promotion Plan) Summer Camp , with the honor of contributing for MMPretrain to build prompt-based classifier.
    • Now, the implement of zero-shot CLIP classifier has been merged to the main branch. PR Link
    • The implement of RAM(Recognize Anything Model) has been merged to the dev branch. Welcome to use the gradio WebUI to test it on MMPretrain! PR Link
  • πŸ’₯ 2023.10: I implement MiniGPT4Qwen, which is a toy model aligning MiniGPT4 with Qwen-Chat LLM model. I just use 18.8k high quality instruction-tuning data(bi-lingual, selected from minigpt4 and llava). Just fine-tuning the projection layer (3M trainable parameters), this model support Chinese and English! MiniGPT4Qwen
  • πŸ’₯ 2024.2: I extend MiniGPT4Qwen to MPP-Qwen14B(Multimodal Pipeline Parallel), scaling up both the LLM(to Qwen-14B-Chat) and pretrain-data(using LLaVA-pretrain-data). I also unfreeze the whole LLM during SFT-stage. All training is conducted on 3090/4090 GPUs. To prevent poverty (24GB of VRAM) from limiting imagination, I implemented an MLLM version based on deepspeed Pipeline Parallel. Pre-training can be completed in 22 hours on 2x4090s, while SFT requires training on 6x4090s (because it needs to fully activate the LLM), but due to the small amount of data, it only takes several hours.MPP-Qwen14B

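The MiniGPT4Qwen recipe boils down to freezing both the vision encoder and the LLM, and training only a small projection that maps visual features into the LLM's embedding space. Here is a minimal PyTorch sketch of that parameter setup, assuming hypothetical module names and feature dimensions:

```python
import torch
import torch.nn as nn

class ToyMLLM(nn.Module):
    """Toy MiniGPT4Qwen-style wrapper: only the projection is trainable."""

    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int = 1408, llm_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.llm = llm
        # The only trainable piece: map visual features into the LLM space.
        self.projection = nn.Linear(vision_dim, llm_dim)

        # Freeze everything except the projection layer.
        for module in (self.vision_encoder, self.llm):
            for p in module.parameters():
                p.requires_grad = False

    def encode_image(self, pixel_values: torch.Tensor) -> torch.Tensor:
        feats = self.vision_encoder(pixel_values)  # (B, N, vision_dim)
        return self.projection(feats)              # (B, N, llm_dim)
```

With this setup only the ~3M projection parameters reach the optimizer, e.g. `torch.optim.AdamW(p for p in model.parameters() if p.requires_grad)`.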

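And the MPP-Qwen14B trick of fitting a 14B model onto 24GB cards is DeepSpeed pipeline parallelism: the network is expressed as a flat list of layers that DeepSpeed partitions across GPUs, each stage holding only its own slice. A rough sketch of that wiring under a `deepspeed` launcher (the placeholder blocks and config path are assumptions, not the actual MPP-Qwen14B code):

```python
import deepspeed
import torch.nn as nn
from deepspeed.pipe import LayerSpec, PipelineModule

# Placeholder stages standing in for embedding / transformer / head layers
# (forward bodies omitted for brevity).
class EmbeddingStage(nn.Module): ...
class TransformerBlock(nn.Module): ...
class LMHead(nn.Module): ...

layers = (
    [LayerSpec(EmbeddingStage)]
    + [LayerSpec(TransformerBlock) for _ in range(40)]  # Qwen-14B has 40 blocks
    + [LayerSpec(LMHead)]
)

# Split the layer list across 2 pipeline stages (e.g. one per 4090).
model = PipelineModule(layers=layers, num_stages=2)

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config="ds_config.json",  # hypothetical DeepSpeed config path
)

# Each step pulls micro-batches from an iterator and runs the pipeline schedule:
# loss = engine.train_batch(data_iter=train_iter)
```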
[Coobiw's GitHub stats card]

Pinned

  1. open-mmlab/mmpretrain

    OpenMMLab Pre-training Toolbox and Benchmark

    Python · 3.2k stars · 1k forks

  2. MiniGPT4Qwen

    Personal project: MPP-Qwen14B (Multimodal Pipeline Parallel-Qwen14B). Don't let poverty limit your imagination! Train your own 14B LLaVA-like MLLM on RTX 3090/4090 24GB.

    Jupyter Notebook · 253 stars · 13 forks

  3. IP-IQA

    [ICME 2024, official code] for the paper "Bringing Textual Prompt to AI-Generated Image Quality Assessment"

    4 stars

  4. TriVQA

    [CVPRW 2024, official code] for the paper "Exploring AIGC Video Quality: A Focus on Visual Harmony, Video-Text Consistency and Domain Distribution Gap"

    8 stars