Winter 2021 CS291A Syllabus

Check out the class presentation schedule for additional readings:

[https://docs.google.com/spreadsheets/d/1p0M7X9OZwcRHT4OhxX3snGjskfltUG-uFIL0T6rLjK8/edit?usp=sharing Class Presentation Schedule]

*1/4 Introduction, logistics, and deep learning
*1/6 Tips for a successful class project
*1/11 Neural network basics and backpropagation
*1/13 Word embeddings (Project proposal due 23:59 PT 1/13, [https://forms.gle/TjYSjc5iE1Zm24ED8 submission link]; HW1 out)
** [https://www.aclweb.org/anthology/Q17-1010/ Enriching Word Vectors with Subword Information]
** [https://www.aclweb.org/anthology/C18-1139/ Contextual String Embeddings for Sequence Labeling]
** [https://www.aclweb.org/anthology/D14-1162/ GloVe: Global Vectors for Word Representation]
*1/18 NO CLASS (University Holiday: Martin Luther King Jr. Day)
*1/20 RNNs
** [http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf Recurrent neural network based language model]
** [https://arxiv.org/abs/1502.03240 Conditional Random Fields as Recurrent Neural Networks]
*1/25 LSTMs/GRUs
** [https://arxiv.org/pdf/1802.05365.pdf Deep contextualized word representations]
** [https://arxiv.org/pdf/1801.06146.pdf Universal Language Model Fine-tuning for Text Classification]
** [https://arxiv.org/pdf/1410.3916.pdf Memory Networks]
*1/27 Sequence-to-sequence models
** [https://www.aclweb.org/anthology/N19-4009/ fairseq: A Fast, Extensible Toolkit for Sequence Modeling]
** [https://arxiv.org/abs/1511.06732 Sequence Level Training with Recurrent Neural Networks]
*2/1 Convolutional Neural Networks (HW1 due and HW2 out)
** [https://arxiv.org/abs/1608.06993 Densely Connected Convolutional Networks]
** [https://www.nature.com/articles/s41586-019-1923-7 Improved protein structure prediction using potentials from deep learning]
*2/3 Attention mechanisms
** [https://www.aclweb.org/anthology/D15-1044.pdf A Neural Attention Model for Sentence Summarization]
** [https://arxiv.org/abs/1409.0473 Neural Machine Translation by Jointly Learning to Align and Translate]
*2/8 Transformer and BERT (Mid-term report due, [https://forms.gle/3mLA46FANTDZ5s5FA submission link])
** [https://arxiv.org/abs/1906.08237 XLNet: Generalized Autoregressive Pretraining for Language Understanding]
** [https://arxiv.org/abs/1907.11692 RoBERTa: A Robustly Optimized BERT Pretraining Approach]
** [https://arxiv.org/abs/1909.11942 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations]
*2/10 Mid-term project updates ([https://forms.gle/XMKr1nNsJieUK4jD6 upload your slides here] by noon on 2/9)
*2/15 NO CLASS (University Holiday: Presidents' Day)
*2/17 Language and vision
** [https://openreview.net/pdf?id=YicbFdNTTy An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale]
** [https://openaccess.thecvf.com/content_CVPR_2020/papers/Chaplot_Neural_Topological_SLAM_for_Visual_Navigation_CVPR_2020_paper.pdf Neural Topological SLAM for Visual Navigation]
*2/22 Deep Reinforcement Learning 1
** [https://papers.nips.cc/paper/2017/hash/9e82757e9a1c12cb710ad680db11f6f1-Abstract.html Imagination-Augmented Agents for Deep Reinforcement Learning]
** [https://openreview.net/pdf?id=S1g2skStPB Causal Discovery with Reinforcement Learning]
*2/24 Deep Reinforcement Learning 2 (HW2 due: Friday 2/26, 11:59pm)
** [https://arxiv.org/abs/1705.05363 Curiosity-driven Exploration by Self-supervised Prediction]
** [https://www.nature.com/articles/s41586-019-1724-z Grandmaster level in StarCraft II using multi-agent reinforcement learning]
** [https://www.nature.com/articles/s41586-020-03051-4 Mastering Atari, Go, chess and shogi by planning with a learned model]
*3/1 GANs
** [https://arxiv.org/abs/1701.07875 Wasserstein GAN]
** [https://arxiv.org/abs/1703.10717 Boundary Equilibrium GAN]
* Check out the final project presentation schedule here: [https://docs.google.com/spreadsheets/d/1T791ZMd4l6IZrdcWhpTuZx-37f_XWEgfd_BtGmEXRrw/edit?usp=sharing schedule]
*3/3 Project: final presentation (1) ([https://forms.gle/7d2iVhT322bzY9UCA submission link], due by noon 3/2)
*3/8 Project: final presentation (2) ([https://forms.gle/7d2iVhT322bzY9UCA submission link], due by noon 3/7)
*3/10 Project: final presentation (3) ([https://forms.gle/7d2iVhT322bzY9UCA submission link], due by noon 3/9)
*3/19 Project final report due 23:59 PT ([https://forms.gle/kgN8n8XDz83NWdxo9 submission link])
