Winter 2021 CS291A Syllabus
Check out the class presentation schedule for additional readings:

[https://docs.google.com/spreadsheets/d/1p0M7X9OZwcRHT4OhxX3snGjskfltUG-uFIL0T6rLjK8/edit?usp=sharing Class Presentation Schedule]
  
*1/4 Introduction, logistics, and deep learning.

*1/6 Tips for a successful class project

*1/11 Neural network basics & backpropagation

*1/13 Word embeddings (Project proposal due 23:59 PT 1/13 [https://forms.gle/TjYSjc5iE1Zm24ED8 submission link], HW1 out)

** [https://www.aclweb.org/anthology/Q17-1010/ Enriching Word Vectors with Subword Information]

** [https://www.aclweb.org/anthology/C18-1139/ Contextual String Embeddings for Sequence Labeling]
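As a toy illustration of the subword idea in the first reading above: a word's vector is built from its character n-grams, so even unseen words get sensible representations. This is a minimal numpy sketch, not the paper's implementation; the hashing trick, dimensions, and class name are all made up for illustration.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of a word, with fastText-style boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

class SubwordEmbeddings:
    """Toy lookup: a word vector is the mean of its hashed n-gram vectors."""
    def __init__(self, dim=16, n_buckets=1000, seed=0):
        rng = np.random.default_rng(seed)
        self.table = rng.normal(size=(n_buckets, dim))  # one row per hash bucket
        self.n_buckets = n_buckets

    def vector(self, word):
        idx = [hash(g) % self.n_buckets for g in char_ngrams(word)]
        return self.table[idx].mean(axis=0)

emb = SubwordEmbeddings()
v1 = emb.vector("where")    # out-of-vocabulary words still get a vector
v2 = emb.vector("whereby")  # shares n-grams with "where", so the vectors are related
```

In the real model the n-gram vectors are trained with a skip-gram objective rather than drawn at random; only the composition-by-n-grams structure is shown here.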
*1/18 NO CLASS (University Holiday: Martin Luther King Jr. Day)

*1/20 RNNs

** [http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf Recurrent neural network based language model]

** [https://arxiv.org/abs/1502.03240 Conditional Random Fields as Recurrent Neural Networks]
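The core of an RNN language model, as in the Mikolov et al. reading, can be sketched in a few lines: each step folds one token into a hidden state and emits a distribution over the next token. Everything below (sizes, initialization, tanh instead of the paper's sigmoid) is an illustrative toy, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
V_SIZE, H = 10, 8            # toy vocabulary and hidden-state sizes
Wxh = rng.normal(scale=0.1, size=(H, V_SIZE))   # input -> hidden
Whh = rng.normal(scale=0.1, size=(H, H))        # hidden -> hidden (recurrence)
Why = rng.normal(scale=0.1, size=(V_SIZE, H))   # hidden -> output logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_lm_step(token_id, h):
    """One Elman-RNN step: consume one token, return the next-token
    distribution and the updated hidden state."""
    x = np.zeros(V_SIZE); x[token_id] = 1.0   # one-hot input token
    h = np.tanh(Wxh @ x + Whh @ h)            # recurrent state update
    p = softmax(Why @ h)                      # distribution over the next token
    return p, h

h = np.zeros(H)
for tok in [1, 4, 2]:        # run the model over a toy token sequence
    p, h = rnn_lm_step(tok, h)
```

Training would backpropagate the cross-entropy of each `p` against the observed next token through time; only the forward recurrence is shown.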
*1/25 LSTMs/GRUs

** [https://arxiv.org/pdf/1802.05365.pdf Deep contextualized word representations]

** [https://arxiv.org/pdf/1410.3916.pdf Memory Networks]
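A GRU cell makes the gating idea behind this lecture concrete: two sigmoid gates decide how much of the old state to keep and how much of a candidate state to write. This is a hedged numpy sketch with made-up sizes; gate placement follows the Chung et al. formulation, and some papers swap the roles of z and 1 − z.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde,
    then a gated interpolation between the old and candidate states."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate: how much to rewrite
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate new state
    return (1 - z) * h + z * h_tilde           # interpolate old and new

rng = np.random.default_rng(0)
D, H = 4, 6                                    # toy input and state sizes
params = [rng.normal(scale=0.1, size=s)
          for s in [(H, D), (H, H), (H, D), (H, H), (H, D), (H, H)]]
h = np.zeros(H)
for _ in range(5):                             # run a few steps on random inputs
    h = gru_cell(rng.normal(size=D), h, params)
```

An LSTM differs mainly in keeping a separate memory cell with input, forget, and output gates; the interpolation trick that eases gradient flow is the same.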
*1/27 Sequence-to-sequence models

** [https://www.aclweb.org/anthology/N19-4009/ fairseq: A Fast, Extensible Toolkit for Sequence Modeling]

** [https://arxiv.org/abs/1511.06732 Sequence Level Training with Recurrent Neural Networks]
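The encode-then-greedily-decode loop shared by all of these systems can be sketched with stand-in components: the "encoder" below is just a mean of source embeddings and the "decoder" a toy recurrence, so only the control flow (feed back the previous token, stop at the end symbol) is meant literally.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 12, 8                 # toy target vocabulary (0 = <eos>) and state size
EOS = 0

def encode(src_ids, Wenc):
    """Toy encoder: a mean of source embeddings stands in for the RNN summary."""
    return Wenc[src_ids].mean(axis=0)

def greedy_decode(ctx, Wdec, Wout, max_len=10):
    """Greedy seq2seq decoding: update the state from the previous token,
    pick the argmax token each step, and stop at <eos> or max_len."""
    out, h, prev = [], ctx, EOS               # <eos> doubles as the start symbol
    for _ in range(max_len):
        h = np.tanh(h + Wdec[prev])           # toy recurrent update
        tok = int(np.argmax(Wout @ h))        # greedy choice (no beam search)
        if tok == EOS:
            break
        out.append(tok)
        prev = tok
    return out

Wenc = rng.normal(size=(V, H))
Wdec = rng.normal(size=(V, H))
Wout = rng.normal(size=(V, H))
hyp = greedy_decode(encode([3, 5, 7], Wenc), Wdec, Wout)
```

The Ranzato et al. reading is precisely about the mismatch this loop creates: training feeds gold prefixes while decoding feeds the model's own (possibly wrong) predictions back in.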
*2/1 Convolutional Neural Networks (HW1 due and HW2 out)

** [https://arxiv.org/abs/1608.06993 Densely Connected Convolutional Networks]

** [https://www.nature.com/articles/s41586-019-1923-7 Improved protein structure prediction using potentials from deep learning]
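For the NLP side of this topic, the classic text-CNN recipe is a one-dimensional convolution over token embeddings followed by max-over-time pooling, which maps a variable-length sentence to a fixed-size feature vector. A minimal numpy sketch with illustrative sizes:

```python
import numpy as np

def conv1d_max_pool(X, filters):
    """Slide each filter over the token sequence (valid positions only) and
    take a max over time per filter -- standard text-CNN pooling."""
    T, D = X.shape
    w, _, F = filters.shape                   # filter width, embed dim, n_filters
    feats = np.full(F, -np.inf)
    for t in range(T - w + 1):
        window = X[t:t + w]                   # (w, D) slice of the sequence
        scores = np.einsum('wd,wdf->f', window, filters)
        feats = np.maximum(feats, scores)     # max-over-time pooling
    return np.tanh(feats)                     # fixed-size sentence feature

rng = np.random.default_rng(0)
X = rng.normal(size=(9, 5))                   # 9 tokens, 5-dim embeddings
filters = rng.normal(size=(3, 5, 4))          # width-3 filters, 4 feature maps
feat = conv1d_max_pool(X, filters)
```

Note that the output size depends only on the number of filters, not the sentence length, which is what lets a classifier sit on top.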
*2/3 Attention mechanisms

** [https://www.aclweb.org/anthology/D15-1044.pdf A Neural Attention Model for Sentence Summarization]

** [https://arxiv.org/abs/1409.0473 Neural Machine Translation by Jointly Learning to Align and Translate]
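The additive attention of Bahdanau et al. fits in a few lines: score each encoder state against the decoder query with a small MLP, softmax the scores into a distribution, and take the weighted average as the context vector. Sizes and weight names below are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def additive_attention(query, keys, values, Wq, Wk, v):
    """Bahdanau-style attention: score = v . tanh(Wq q + Wk k), then a
    softmax over positions and a weighted sum of the values."""
    scores = np.array([v @ np.tanh(Wq @ query + Wk @ k) for k in keys])
    weights = softmax(scores)                 # attention distribution over inputs
    context = weights @ values                # weighted sum of encoder states
    return context, weights

rng = np.random.default_rng(0)
Dq, Dk, A, T = 6, 5, 7, 4                     # query/key dims, MLP size, positions
Wq = rng.normal(size=(A, Dq))
Wk = rng.normal(size=(A, Dk))
v = rng.normal(size=A)
keys = rng.normal(size=(T, Dk))               # toy encoder states (also the values)
ctx, w = additive_attention(rng.normal(size=Dq), keys, keys, Wq, Wk, v)
```

Multiplicative (dot-product) attention replaces the MLP score with a scaled inner product; the softmax-then-average structure is identical.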
*2/8 Transformer and BERT (Mid-term report due [https://forms.gle/3mLA46FANTDZ5s5FA submission link])

** [https://arxiv.org/abs/1906.08237 XLNet: Generalized Autoregressive Pretraining for Language Understanding]

** [https://arxiv.org/abs/1907.11692 RoBERTa: A Robustly Optimized BERT Pretraining Approach]

** [https://arxiv.org/abs/1909.11942 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations]
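The masked-LM corruption at the heart of BERT-style pretraining (and tweaked by the RoBERTa reading) is easy to state as code: pick roughly 15% of positions as prediction targets; of those, 80% become a mask token, 10% a random token, and 10% stay unchanged. A stdlib-only sketch, with the token list and seed made up for illustration:

```python
import random

def mask_for_mlm(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """BERT-style masked-LM corruption: select ~p of positions as targets,
    then apply the 80/10/10 mask / random / keep split."""
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() >= p:
            continue
        targets[i] = tok                  # the model must predict the original
        r = rng.random()
        if r < 0.8:
            out[i] = mask_token           # 80%: replace with [MASK]
        elif r < 0.9:
            out[i] = rng.choice(tokens)   # 10%: replace with a random token
        # else: 10%: leave unchanged, but still predict it
    return out, targets

sent = "the quick brown fox jumps over the lazy dog".split()
corrupted, targets = mask_for_mlm(sent)
```

The 10% "keep" branch is what forces the model to maintain a good representation of every input token, not just the visibly masked ones.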
*2/10 Mid-term project updates ([https://forms.gle/XMKr1nNsJieUK4jD6 upload your slides here] by 2/9 noon)

*2/15 NO CLASS (University Holiday: Presidents' Day)

*2/17 Language and vision

** [https://openreview.net/pdf?id=YicbFdNTTy An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale]

** [https://openaccess.thecvf.com/content_CVPR_2020/papers/Chaplot_Neural_Topological_SLAM_for_Visual_Navigation_CVPR_2020_paper.pdf Neural Topological SLAM for Visual Navigation]

*2/22 Deep Reinforcement Learning 1

** [https://papers.nips.cc/paper/2017/hash/9e82757e9a1c12cb710ad680db11f6f1-Abstract.html Imagination-Augmented Agents for Deep Reinforcement Learning]

** [https://openreview.net/pdf?id=S1g2skStPB Causal Discovery with Reinforcement Learning]

*2/24 Deep Reinforcement Learning 2 (HW2 due: 2/26 Friday 11:59pm)

** [https://arxiv.org/abs/1705.05363 Curiosity-driven Exploration by Self-supervised Prediction]

** [https://www.nature.com/articles/s41586-019-1724-z Grandmaster level in StarCraft II using multi-agent reinforcement learning]

** [https://www.nature.com/articles/s41586-020-03051-4 Mastering Atari, Go, chess and shogi by planning with a learned model]
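Underneath the deep RL readings above sits the same temporal-difference update that tabular Q-learning makes explicit; deep methods replace the table with a neural network over raw observations. A self-contained toy on a 5-state chain MDP (all constants and the tie-breaking rule are illustrative choices, not from any of the papers):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain: start at state 0, actions are
    left (0) / right (1), reward 1 only on reaching the final state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]        # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (ties broken toward right)
            a = rng.randrange(2) if rng.random() < eps else (1 if Q[s][1] >= Q[s][0] else 0)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD update toward the bootstrapped target r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
policy = [1 if Q[s][1] >= Q[s][0] else 0 for s in range(4)]  # greedy actions
```

DQN keeps exactly this target but fits Q with a convolutional network, a replay buffer, and a frozen target network; MuZero goes further and also learns the transition model used for planning.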
*3/1 Generative Adversarial Networks

** [https://arxiv.org/abs/1701.04862 Towards Principled Methods for Training Generative Adversarial Networks]

** [https://arxiv.org/abs/1701.07875 Wasserstein GAN]

** [https://arxiv.org/abs/1703.10717 BEGAN: Boundary Equilibrium Generative Adversarial Networks]
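The contrast between the standard GAN objective and the Wasserstein variant in the readings is visible directly in the loss functions, given real and fake discriminator/critic scores. A numpy sketch (the toy scores are fabricated for illustration; a real setup would also need the WGAN Lipschitz constraint, e.g. weight clipping or a gradient penalty):

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Standard (non-saturating) GAN losses from discriminator logits:
    D maximizes log D(x) + log(1 - D(G(z))); G maximizes log D(G(z))."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    d_loss = -np.mean(np.log(sig(d_real)) + np.log(1.0 - sig(d_fake)))
    g_loss = -np.mean(np.log(sig(d_fake)))
    return d_loss, g_loss

def wgan_losses(c_real, c_fake):
    """WGAN objective: the critic's score gap estimates the Wasserstein-1
    distance -- no sigmoid, no log, hence smoother generator gradients."""
    critic_loss = -(np.mean(c_real) - np.mean(c_fake))
    gen_loss = -np.mean(c_fake)
    return critic_loss, gen_loss

rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, 64)    # toy scores: critic rates real data higher
fake = rng.normal(-2.0, 1.0, 64)
d_loss, g_loss = gan_losses(real, fake)
c_loss, _ = wgan_losses(real, fake)
```

The first reading above analyzes why the log-sigmoid losses saturate when D gets too good; WGAN's linear critic loss is the proposed fix.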
* Check out the final project presentation schedule here: [https://docs.google.com/spreadsheets/d/1T791ZMd4l6IZrdcWhpTuZx-37f_XWEgfd_BtGmEXRrw/edit?usp=sharing schedule]

*3/3 Project: final presentation (1) ([https://forms.gle/7d2iVhT322bzY9UCA submission link] by 3/2 noon)

*3/8 Project: final presentation (2) ([https://forms.gle/7d2iVhT322bzY9UCA submission link] by 3/7 noon)

*3/10 Project: final presentation (3) ([https://forms.gle/7d2iVhT322bzY9UCA submission link] by 3/9 noon)

*3/19 23:59 PT Project Final Report Due ([https://forms.gle/kgN8n8XDz83NWdxo9 submission link])

Latest revision as of 21:56, 21 February 2021