For more information about Stanford's Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3CvTOGY
This lecture covers:
1. Impact of Transformers on NLP (and ML more broadly)
2. From Recurrence (RNNs) to Attention-Based NLP Models
3. Understanding the Transformer Model
4. Drawbacks and Variants of Transformers
To learn more about this course, visit: https://online.stanford.edu/courses/cs224n-natural-language-processing-deep-learning
To follow along with the course schedule and syllabus, visit: http://web.stanford.edu/class/cs224n/
John Hewitt
PhD student in Computer Science at Stanford University
Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
#deeplearning #naturallanguageprocessing