Spring 2017 CS292F Syllabus

*04/11 NLP Tasks
 
*04/13 Word embeddings (a toy skip-gram sketch follows the readings below)
** Christian Bueno: [https://people.cs.umass.edu/~arvind/emnlp2014.pdf Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space, Neelakantan et al., EMNLP 2014]
** Keqian Li: [http://www.anthology.aclweb.org/D/D14/D14-1162.pdf GloVe: Global Vectors for Word Representation, Pennington, Socher, and Manning, EMNLP 2014]
** Mengya Tao: [http://www.aclweb.org/anthology/P15-1173 AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes, Rothe and Schütze, ACL 2015]
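
For orientation, a toy sketch of the skip-gram-with-negative-sampling objective behind word2vec-style embeddings. This is not code from any assigned paper; the corpus, dimensions, and hyperparameters are made up, and real systems train on far larger corpora.

<syntaxhighlight lang="python">
# Toy skip-gram with negative sampling in plain numpy (illustrative only).
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                         # vocabulary size, embedding dim

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))    # target-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))   # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k = 0.05, 2, 3                   # step size, context width, negatives
for epoch in range(200):
    for pos, word in enumerate(corpus):
        t = idx[word]
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= pos + off < len(corpus):
                continue
            c = idx[corpus[pos + off]]
            # one observed (positive) context plus k sampled negatives
            for label, ctx in [(1, c)] + [(0, int(rng.integers(V))) for _ in range(k)]:
                grad = sigmoid(W_in[t] @ W_out[ctx]) - label
                g_in = grad * W_out[ctx].copy()
                W_out[ctx] -= lr * grad * W_in[t]
                W_in[t] -= lr * g_in

# nearest neighbours of "cat" under cosine similarity
v = W_in[idx["cat"]]
sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-9)
print([vocab[i] for i in np.argsort(-sims)[:3]])
</syntaxhighlight>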
 
*04/18 Neural network basics (Project proposal due, HW1 out; a worked backpropagation example follows the readings below)
** Arturo Deza: [http://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf Learning representations by back-propagating errors, Rumelhart, Hinton, and Williams, Nature, 1986]
** Rachel Redberg: [https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, Socher et al., EMNLP 2013]
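
As a concrete companion to the Rumelhart et al. reading, a minimal backpropagation example with hand-derived gradients on XOR. Layer sizes, learning rate, and iteration count are arbitrary choices for the sketch, not values from the paper.

<syntaxhighlight lang="python">
# Two-layer sigmoid network trained on XOR with hand-coded backprop.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule, layer by layer (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
</syntaxhighlight>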
 
*04/25 RNNs (NLP seminar: Stanford NLP's Jiwei Li, 04/26; a minimal RNN recurrence is sketched after the readings below)
** [http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf Recurrent neural network based language model, Mikolov et al., INTERSPEECH 2010]
** Yuanshun Yao: [https://arxiv.org/pdf/1308.0850.pdf Generating Sequences With Recurrent Neural Networks, Alex Graves, arXiv, 2013]
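
A minimal sketch of the recurrence inside an RNN language model, in the spirit of the Mikolov et al. reading. The weights are random and untrained, so the "predictions" are meaningless; the point is the update h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).

<syntaxhighlight lang="python">
# Forward pass of a vanilla RNN over a toy character sequence (untrained).
import numpy as np

rng = np.random.default_rng(2)
chars = sorted(set("hello"))
idx = {c: i for i, c in enumerate(chars)}
V, H = len(chars), 16                 # vocabulary size, hidden size

W_xh = rng.normal(scale=0.1, size=(V, H))
W_hh = rng.normal(scale=0.1, size=(H, H))
W_hy = rng.normal(scale=0.1, size=(H, V))
b_h, b_y = np.zeros(H), np.zeros(V)

h = np.zeros(H)
for c in "hell":                      # predict the next character at each step
    x = np.eye(V)[idx[c]]             # one-hot input
    h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    logits = h @ W_hy + b_y
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()              # softmax over the vocabulary
    print(c, "->", chars[int(np.argmax(probs))])
</syntaxhighlight>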
 
*04/27 LSTMs/GRUs (the LSTM gate equations are written out after the readings below)
** [http://www.bioinf.jku.at/publications/older/2604.pdf Long Short-Term Memory, S. Hochreiter and J. Schmidhuber, Neural Computation, 1997]
** [https://arxiv.org/pdf/1409.1259.pdf On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, Cho et al., 2014]
** Daniel Spokoyny: [https://arxiv.org/pdf/1502.02367v3.pdf Gated Feedback Recurrent Neural Networks, Chung et al., ICML 2015]
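
For reference, the LSTM cell in its modern form with input, forget, and output gates (the forget gate was added after the 1997 paper, by Gers et al., 2000):

<math>
\begin{align}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{align}
</math>

The additive cell update is what lets gradients flow over long spans; the GRU of the Cho et al. reading merges these gates into a smaller two-gate variant.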
 
*05/02 Sequence-to-sequence models and neural machine translation (HW1 due and HW2 out; a toy encoder-decoder sketch follows the readings below)
** Zhujun Xiao: [http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf End-to-end memory networks, Sukhbaatar et al., NIPS 2015]
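
A toy forward pass of an RNN encoder-decoder, the basic architecture behind neural machine translation. Weights are random and untrained, and the token ids and sizes are invented for illustration.

<syntaxhighlight lang="python">
# Untrained RNN encoder-decoder: the encoder's final hidden state seeds the decoder.
import numpy as np

rng = np.random.default_rng(3)
V_src, V_tgt, H = 5, 6, 8                    # source/target vocab sizes, hidden size
Enc_x = rng.normal(scale=0.1, size=(V_src, H))
Enc_h = rng.normal(scale=0.1, size=(H, H))
Dec_x = rng.normal(scale=0.1, size=(V_tgt, H))
Dec_h = rng.normal(scale=0.1, size=(H, H))
Dec_y = rng.normal(scale=0.1, size=(H, V_tgt))

src = [0, 3, 1]                              # token ids of a made-up source sentence
h = np.zeros(H)
for tok in src:                              # encode: fold the source into one vector
    h = np.tanh(np.eye(V_src)[tok] @ Enc_x + h @ Enc_h)

y, out = 0, []                               # decode greedily from a BOS id of 0
for _ in range(4):
    h = np.tanh(np.eye(V_tgt)[y] @ Dec_x + h @ Dec_h)
    y = int(np.argmax(h @ Dec_y))            # most probable next target token
    out.append(y)
print(out)
</syntaxhighlight>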
 
*05/09 Project: mid-term presentation (1)
** JONNALAGADDA, ADITYA
** ZHA, HANWEN
** AGHAKHANI, HOJJAT
** JAIN, ROHAN
** WANG, XIN
** KOUPAEE, MAHNAZ
** YAO, YUANSHUN
** LI, ZHIJING
 
*05/11 Project: mid-term presentation (2)
** SPOKOYNY, DANIEL
** ZHANG, FANGJUN
** FEINN, ZACHARY
** JIN, XIAOYONG
** REDBERG, RACHEL
** XIONG, WENHAN
** ZHAO, YUN
** SADIGH, SHAYAN
** XIAO, ZHUJUN
** ZHANG, XINYI
 
*05/16 Convolutional Neural Networks (HW2 due; a 1-D convolution sketch follows the readings below)
** Zachary Feinn: [http://ronan.collobert.com/pub/matos/2011_nlp_jmlr.pdf Natural Language Processing (Almost) from Scratch, Collobert et al., JMLR 2011]
** Shiliang Tang: [https://arxiv.org/pdf/1411.4555.pdf Show and Tell: A Neural Image Caption Generator, Vinyals et al., CVPR 2015]
** Aditya Jonnalagadda: [http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Karpathy_Deep_Visual-Semantic_Alignments_2015_CVPR_paper.pdf Deep Visual-Semantic Alignments for Generating Image Descriptions, Andrej Karpathy and Li Fei-Fei, CVPR 2015]
** [http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhu_Aligning_Books_and_ICCV_2015_paper.pdf Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, Zhu et al., ICCV 2015]
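
A sketch of the 1-D convolution with max-over-time pooling used by CNNs over text, as in the Collobert et al. reading; shapes and values below are arbitrary.

<syntaxhighlight lang="python">
# One convolutional layer over toy word embeddings, then max-over-time pooling.
import numpy as np

rng = np.random.default_rng(4)
T, D, F, width = 7, 4, 3, 3          # sentence length, embed dim, filters, window
X = rng.normal(size=(T, D))          # hypothetical embedded sentence
W = rng.normal(size=(F, width * D))  # each filter spans `width` word vectors
b = np.zeros(F)

feats = []
for t in range(T - width + 1):       # slide the window across the sentence
    window = X[t:t + width].ravel()  # concatenate `width` embeddings
    feats.append(np.maximum(0, W @ window + b))   # ReLU feature map
feats = np.array(feats)              # shape: (T - width + 1, F)

sentence_vec = feats.max(axis=0)     # max-over-time pooling -> fixed-size vector
print(sentence_vec.shape)            # (F,)
</syntaxhighlight>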
 
*05/23 Deep Reinforcement Learning 1 (a tabular Q-learning sketch follows the readings below)
** Rohan Jain: [https://aclweb.org/anthology/D16-1127 Deep Reinforcement Learning for Dialogue Generation, Li et al., EMNLP 2016]
** Zhijing Li: [https://arxiv.org/pdf/1509.02971.pdf Continuous control with deep reinforcement learning, Lillicrap et al., ICLR 2016]
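
A tabular Q-learning sketch on a made-up five-state chain. The deep RL methods in the readings replace the table with a neural network (and, for dialogue, shape the reward), but the underlying update is the same in spirit.

<syntaxhighlight lang="python">
# Tabular Q-learning on a 5-state chain; the rightmost state is terminal.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # step size, discount, exploration rate

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best next-state value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1)[:-1])        # greedy policy in non-terminal states: go right
</syntaxhighlight>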
 
*05/30 Unsupervised Learning (the VAE objective is written out after the readings below)
** [https://arxiv.org/abs/1312.6114 Auto-Encoding Variational Bayes, Kingma and Welling, ICLR 2014]
** Hojjat Aghakhani: [https://arxiv.org/pdf/1511.06434.pdf Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, Radford et al., 2015]
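
For reference, the evidence lower bound (ELBO) maximized in the Kingma and Welling reading; the first term rewards reconstruction and the second keeps the approximate posterior close to the prior:

<math>
\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right)
</math>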
 
*06/01 Project: final presentation (1)
**ADITYA
**XINYI
**RACHEL
**YUANSHUN
**HANWEN
**ROHAN
 
*06/06 Project: final presentation (2)
**MAHNAZ
**YUN
**XIN
**SHAYAN
**ZHIJING
**ZHUJUN
**ZACHARY
 
*06/08 Project: final presentation (3)
**XIAOYONG
**DANIEL
**WENHAN
**HOJJAT
**SHILIANG
**FANGJUN
 
*06/10, 23:59 PT: Project Final Report Due.
