Spring 2017 CS292F Syllabus
*04/20 Recursive Neural Networks
** Rachel Redberg: [https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, Socher et al., EMNLP 2013]
*04/25 RNNs (NLP seminar: Stanford NLP's Jiwei Li 04/26)
** [http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf Recurrent Neural Network Based Language Model, Mikolov et al., Interspeech 2010]
** Yuanshun Yao: [https://arxiv.org/pdf/1308.0850.pdf Generating Sequences With Recurrent Neural Networks, Alex Graves, 2013 arXiv]
*04/27 LSTMs/GRUs
** [http://www.bioinf.jku.at/publications/older/2604.pdf Long Short-Term Memory, S. Hochreiter and J. Schmidhuber, Neural Computation, 1997]
** [https://arxiv.org/pdf/1409.1259.pdf On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, Cho et al., 2014]
** Daniel Spokoyny: [https://arxiv.org/pdf/1502.02367v3.pdf Gated Feedback Recurrent Neural Networks, Chung et al., ICML 2015]
*05/02 Sequence-to-sequence models and neural machine translation (HW1 due and HW2 out)
Revision as of 12:36, 24 April 2017
- 04/04 Introduction, logistics, NLP, and deep learning.
- 04/06 Tips for a successful class project
- 04/11 NLP Tasks
- 04/13 Word embeddings
- Christian Bueno: Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space, Neelakantan et al., EMNLP 2014
- Keqian Li: GloVe: Global Vectors for Word Representation, Pennington, Socher, and Manning, EMNLP 2014
- Mengya Tao: AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes, Rothe and Schütze, ACL 2015
- 04/18 Neural network basics (Project proposal due, HW1 out)
- 04/20 Recursive Neural Networks
- 04/25 RNNs (NLP seminar: Stanford NLP's Jiwei Li 04/26)
- 04/27 LSTMs/GRUs
- 05/02 Sequence-to-sequence models and neural machine translation (HW1 due and HW2 out)
- 05/04 Attention mechanisms
- 05/09 Project: mid-term presentation (1)
- 05/11 Project: mid-term presentation (2)
- 05/16 Convolutional Neural Networks (HW2 due)
- 05/18 Language and vision
- Shiliang Tang: Show and Tell: A Neural Image Caption Generator, CVPR 2015
- Aditya Jonnalagadda: Deep Visual-Semantic Alignments for Generating Image Descriptions, Andrej Karpathy and Li Fei-Fei, CVPR 2015
- Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, Zhu et al., ICCV 2015
- 05/23 Deep Reinforcement Learning 1
- 05/25 Deep Reinforcement Learning 2
- 05/30 Unsupervised Learning
- 06/01 Project: final presentation (1)
- 06/06 Project: final presentation (2)
- 06/08 Project: final presentation (3)
- 06/10 11:59 PM PT: Project final report due