Winter 2021 CS291A Syllabus
*1/4 Introduction, logistics, and deep learning
*1/6 Tips for a successful class project
*1/11 Neural network basics and backpropagation
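As a warm-up for this lecture, backpropagation on a one-hidden-layer network can be sketched in a few lines of numpy (this is an illustrative sketch with made-up sizes and weights, not course-provided code):

```python
import numpy as np

# Toy one-hidden-layer network with a sigmoid activation and squared-error
# loss; gradients are derived by hand with the chain rule.
rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input
y = 1.0                         # target
W1 = rng.normal(size=(4, 3))    # hidden-layer weights
W2 = rng.normal(size=4)         # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass.
h = sigmoid(W1 @ x)             # hidden activations
y_hat = W2 @ h                  # scalar prediction
loss = 0.5 * (y_hat - y) ** 2

# Backward pass: apply the chain rule layer by layer.
d_yhat = y_hat - y              # dL/dy_hat
dW2 = d_yhat * h                # dL/dW2
dh = d_yhat * W2                # dL/dh
dz = dh * h * (1 - h)           # through the sigmoid
dW1 = np.outer(dz, x)           # dL/dW1

# Numerical check of one weight confirms the analytic gradient.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
loss_p = 0.5 * (W2 @ sigmoid(W1p @ x) - y) ** 2
print(abs((loss_p - loss) / eps - dW1[0, 0]) < 1e-4)  # → True
```

The finite-difference check at the end is the standard way to validate a hand-written backward pass.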
*1/13 Word embeddings (Project proposal due [https://forms.gle/TjYSjc5iE1Zm24ED8 submission link], HW1 out)
 
** [https://www.aclweb.org/anthology/Q17-1010/ Enriching Word Vectors with Subword Information]
** [https://www.aclweb.org/anthology/C18-1139/ Contextual String Embeddings for Sequence Labeling]
** [https://www.aclweb.org/anthology/D14-1162/ GloVe: Global Vectors for Word Representation]
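The core idea behind these readings — similar words get nearby vectors — can be illustrated with cosine similarity over toy embeddings (the vectors below are made up for illustration, not real GloVe or fastText output):

```python
import numpy as np

# Hand-made 3-d "embeddings"; real models learn hundreds of dimensions.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.8, 0.9, 0.1]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    # Cosine similarity: dot product of unit-normalized vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(word):
    # Rank all other words by cosine similarity to `word`.
    return max((w for w in emb if w != word),
               key=lambda w: cosine(emb[word], emb[w]))

print(nearest("king"))   # → queen
```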
*1/18 University Holiday: Martin Luther King Jr. Day
*1/20 RNNs
 
** [http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf Recurrent neural network based language model]
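The recurrence at the heart of this lecture can be sketched as an Elman RNN unrolled over a sequence (toy sizes and random weights; the Mikolov RNNLM above additionally puts a softmax over the vocabulary on top of the hidden state):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 5
Wx = rng.normal(scale=0.1, size=(d_h, d_in))  # input-to-hidden
Wh = rng.normal(scale=0.1, size=(d_h, d_h))   # hidden-to-hidden (recurrence)
b = np.zeros(d_h)

def rnn(xs):
    h = np.zeros(d_h)                 # initial hidden state
    for x in xs:                      # the SAME weights are reused each step
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h                          # summary of the whole sequence

seq = rng.normal(size=(7, d_in))      # a length-7 input sequence
h_final = rnn(seq)
print(h_final.shape)                  # → (5,)
```

Weight sharing across time steps is what lets the same network process sequences of any length.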
*1/25 LSTMs/GRUs
 
** [https://arxiv.org/pdf/1802.05365.pdf Deep contextualized word representations]
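A single LSTM cell step can be sketched as follows (toy sizes, random weights): the forget/input/output gates and the additive cell-state update are what mitigate the vanishing gradients of plain RNNs, and GRUs merge these into two gates.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 4

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, each acting on the concatenation [x; h_prev].
Wf, Wi, Wo, Wc = (rng.normal(scale=0.1, size=(d_h, d_in + d_h))
                  for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    f = sigmoid(Wf @ z)           # forget gate: what to keep of the old cell
    i = sigmoid(Wi @ z)           # input gate: how much new content to write
    o = sigmoid(Wo @ z)           # output gate: what to expose as h
    c_tilde = np.tanh(Wc @ z)     # candidate cell content
    c = f * c_prev + i * c_tilde  # additive cell-state update
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(6, d_in)):
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)           # → (4,) (4,)
```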
*1/27 Sequence-to-sequence models
*2/1 Convolutional Neural Networks (HW1 due and HW2 out)
 
** [https://www.nature.com/articles/s41586-019-1923-7 Improved protein structure prediction using potentials from deep learning]
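The weight-sharing idea behind CNNs can be sketched as a single 2-D convolution with valid padding and stride 1 (a naive loop for clarity; real frameworks vectorize this and add channels, padding, and strides):

```python
import numpy as np

def conv2d(img, kernel):
    # Slide one kernel over every spatial position: the SAME weights are
    # applied everywhere, which is the CNN weight-sharing principle.
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])        # horizontal difference filter
out = conv2d(img, edge)
print(out.shape)   # → (4, 3)
```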
*2/3 Attention mechanisms
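The scaled dot-product form of attention covered here (and reused by the Transformer later in the course) can be sketched with toy shapes: each query attends over all keys, and the output is an attention-weighted average of the values.

```python
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights                   # weighted average of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, dim 4
K = rng.normal(size=(5, 4))   # 5 keys, dim 4
V = rng.normal(size=(5, 3))   # 5 values, dim 3
out, w = attention(Q, K, V)
print(out.shape)                          # → (2, 3)
print(np.allclose(w.sum(axis=-1), 1.0))  # → True
```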
*2/8 Transformer and BERT (Mid-term report due [https://forms.gle/3mLA46FANTDZ5s5FA submission link])
*2/10 BERT (continued)
*2/15 University Holiday: Presidents' Day
*2/17 Language and vision
 
**[https://openreview.net/pdf?id=YicbFdNTTy An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale]
*2/22 Deep Reinforcement Learning 1 (HW2 due: 2/26 Friday 11:59pm)
*2/24 Deep Reinforcement Learning 2
 
** [https://openreview.net/pdf?id=S1g2skStPB Causal Discovery with Reinforcement Learning]
** [https://www.nature.com/articles/s41586-019-1724-z Grandmaster level in StarCraft II using multi-agent reinforcement learning]
** [https://www.nature.com/articles/s41586-020-03051-4 Mastering Atari, Go, chess and shogi by planning with a learned model]
*3/1 Generative Adversarial Networks
 
** [https://arxiv.org/abs/1701.04862 Towards Principled Methods for Training Generative Adversarial Networks]
** [https://arxiv.org/abs/1701.07875 Wasserstein GAN]
** [https://arxiv.org/abs/1703.10717 Boundary Equilibrium GAN]
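The readings above analyze why the original minimax GAN objective is hard to train. As a toy illustration of that objective only (no training loop; `D` and `G` below are tiny fixed stand-ins, not learned networks), the two loss terms can be evaluated directly:

```python
import numpy as np

# Original GAN objective: D maximizes E[log D(x)] + E[log(1 - D(G(z)))],
# while G minimizes the second term. The Wasserstein variant above replaces
# these log losses with a critic score.
rng = np.random.default_rng(0)

def D(x):                      # stand-in discriminator: a fixed sigmoid score
    return 1.0 / (1.0 + np.exp(-x.sum(axis=-1)))

def G(z):                      # stand-in generator: a fixed linear map
    return 0.5 * z

real = rng.normal(loc=2.0, size=(8, 2))   # "real" samples
z = rng.normal(size=(8, 2))               # latent noise
fake = G(z)

d_loss = -(np.log(D(real)).mean() + np.log(1 - D(fake)).mean())
g_loss = np.log(1 - D(fake)).mean()       # G wants D(fake) pushed toward 1
print(d_loss > 0)   # → True
```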
*3/3 Project: final presentation (1)
*3/8 Project: final presentation (2)
*3/10 Project: final presentation (3)
*3/19 Project final report due 11:59pm PT [https://forms.gle/kgN8n8XDz83NWdxo9 submission link]

Revision as of 01:50, 1 January 2021