Code for the VetLLM paper
We no longer need custom code for fine-tuning LLMs (thanks to the great community), so please refer to the uploaded notebook (taken from the llama-recipes repo) for a tutorial on the general pipeline. The key idea of this paper is to run a quick zero-shot and few-shot evaluation first, and then fine-tune only if needed.
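As a rough illustration of the zero-shot / few-shot evaluation step, here is a minimal sketch using Hugging Face transformers. The model name, prompt template, and example records are placeholders for illustration only, not the ones used in the paper.

```python
# Minimal zero-shot / few-shot evaluation sketch with Hugging Face transformers.
# Model name, prompt format, and examples are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: swap in the model you evaluate

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# Hypothetical few-shot examples; replace with records from your own dataset.
FEW_SHOT_EXAMPLES = [
    ("Clinical note: Dog presents with pruritus and hair loss.", "Diagnosis: dermatitis"),
    ("Clinical note: Cat is polyuric and polydipsic, elevated glucose.", "Diagnosis: diabetes mellitus"),
]


def build_prompt(note: str, n_shots: int = 0) -> str:
    """Build a zero-shot (n_shots=0) or few-shot prompt for one clinical note."""
    parts = [f"{ex_note}\n{ex_label}\n" for ex_note, ex_label in FEW_SHOT_EXAMPLES[:n_shots]]
    parts.append(f"{note}\nDiagnosis:")
    return "\n".join(parts)


def generate(note: str, n_shots: int = 0, max_new_tokens: int = 32) -> str:
    """Run the model on one prompt and return only the newly generated text."""
    prompt = build_prompt(note, n_shots)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()


if __name__ == "__main__":
    note = "Clinical note: Dog is lethargic with pale gums and tachycardia."
    print("zero-shot:", generate(note, n_shots=0))
    print("few-shot: ", generate(note, n_shots=2))
```

If the zero-shot and few-shot outputs are already acceptable for your task, you can stop here; otherwise, move on to fine-tuning using the pipeline in the uploaded notebook.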