Subject associations: ORF 570
Term: Spring 2025
Instructors:
Registrar description:
This course explores cutting-edge aspects of transformers and large language models, which have revolutionized natural language processing and many other domains of artificial intelligence. Key topics include:
- transformer architecture fundamentals
- self-attention mechanisms and positional encodings
- probabilistic foundations of language modeling and sequence prediction
- pretraining strategies and transfer learning in language models
- scaling laws and the implications of model size for performance
- fine-tuning techniques for specific tasks and domains
- efficiency improvements and model compression techniques
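As a taste of the first two topics, here is a minimal sketch of scaled dot-product self-attention, the core operation of the transformer architecture. The function name, matrix shapes, and random weights are illustrative assumptions, not course material.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative sketch)."""
    # Project the inputs into query, key, and value spaces
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Pairwise similarity scores between positions, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture of the value vectors
    return weights @ V

# Toy example: a sequence of 4 tokens with model dimension 8
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one output vector per input position
```

Positional encodings (also listed above) would be added to `X` before the projections, since this operation by itself is permutation-invariant over positions.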