# LLM from scratch using PyTorch

This repository contains exercise notebooks for the Introduction to Large Language Models (LLMs) course offered at IIT Madras by Mitesh Khapra and me. The notebooks provide templates for implementing the vanilla transformer architecture and GPT- and BERT-like models from scratch using PyTorch (without relying on its built-in transformer layers). You will implement the following core components (a minimal sketch of the first of these appears after the list):

- Multi-Head Attention (MHA)
- Multi-Head Masked Attention (MHMA) and Multi-Head Cross Attention (MHCA)
- Position-wise Feed-Forward Networks (FFN), a.k.a. the MLP block
- Teacher forcing and auto-regressive training
- Causal Language Modelling (CLM) and Masked Language Modelling (MLM)
- Text translation and text generation

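To give a flavour of what the notebook templates ask you to fill in, here is a minimal sketch of multi-head attention built from plain `nn.Linear` layers. The class name, dimensions, and masking convention here are illustrative assumptions, not the course's exact template:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Multi-head attention from scratch (no nn.MultiheadAttention)."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        # Separate projections for queries, keys, values, plus the output projection.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, query, key, value, mask=None):
        # query/key/value: (batch, seq_len, d_model)
        B, T_q, _ = query.shape
        T_k = key.shape[1]

        # Project, then split into heads: (batch, heads, seq_len, d_head).
        q = self.w_q(query).view(B, T_q, self.num_heads, self.d_head).transpose(1, 2)
        k = self.w_k(key).view(B, T_k, self.num_heads, self.d_head).transpose(1, 2)
        v = self.w_v(value).view(B, T_k, self.num_heads, self.d_head).transpose(1, 2)

        # Scaled dot-product attention.
        scores = q @ k.transpose(-2, -1) / (self.d_head ** 0.5)
        if mask is not None:
            # Convention here: mask is True at positions that MAY be attended to.
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)

        # Merge the heads back: (batch, seq_len, d_model), then project out.
        out = (attn @ v).transpose(1, 2).contiguous().view(B, T_q, -1)
        return self.w_o(out)

# Masked (causal) self-attention, as used in CLM: a lower-triangular mask
# prevents each position from attending to future tokens.
mha = MultiHeadAttention(d_model=64, num_heads=4)
x = torch.randn(2, 10, 64)                        # (batch, seq_len, d_model)
causal_mask = torch.tril(torch.ones(10, 10)).bool()
y = mha(x, x, x, mask=causal_mask)                # self-attention
print(y.shape)                                    # torch.Size([2, 10, 64])
```

Cross-attention (MHCA) reuses the same module: pass the decoder states as `query` and the encoder outputs as `key` and `value`.
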
The objective is to give you an in-depth understanding of how things work under the hood of the built-in modules in PyTorch and in high-level APIs such as Hugging Face.
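
As a concrete instance of this "under the hood" goal, the manual attention computation the notebooks build matches what PyTorch's built-in kernel produces. A quick sanity check, assuming PyTorch ≥ 2.0 (which provides `F.scaled_dot_product_attention`):

```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 4, 10, 16)   # (batch, heads, seq_len, d_head)
k = torch.randn(2, 4, 10, 16)
v = torch.randn(2, 4, 10, 16)

# Manual scaled dot-product attention with a causal mask (the from-scratch version).
scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
causal = torch.tril(torch.ones(10, 10, dtype=torch.bool))
scores = scores.masked_fill(~causal, float("-inf"))
manual = F.softmax(scores, dim=-1) @ v

# PyTorch's built-in equivalent.
builtin = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(torch.allclose(manual, builtin, atol=1e-5))  # expected: True
```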