Winter 2018 CS291A Syllabus
 
**Wenhu : [http://www.aclweb.org/anthology/P15-1173 AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes, Rothe and Schütze, ACL 2015]
 
*01/30 Neural network basics (Project proposal due to Grader: Ke Ni <ke00@ucsb.edu>, HW1 out)
 
**Mohith : [http://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf Learning representations by back-propagating errors, Nature, 1986]
**Dan : [https://arxiv.org/abs/1609.04747 An overview of gradient descent optimization algorithms, Sebastian Ruder, Arxiv 2016]
 
**Vivek P.: [http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf Dropout: A simple way to prevent neural networks from overfitting, N. Srivastava et al., JMLR 2014]
 
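The 01/30 readings cover the basic optimization loop behind neural network training. As a minimal sketch of the gradient-descent update discussed in Ruder's overview (the function, learning rate, and step count below are toy values of ours, not from the readings):

```python
# Minimal gradient-descent sketch (illustrative toy example; the
# objective and hyperparameters are our own choices, not from the
# assigned readings).  We minimize f(w) = (w - 3)^2, whose gradient
# is f'(w) = 2 * (w - 3).

def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient: w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_star, 4))  # converges toward the minimizer w = 3
```

The optimizers surveyed in the overview paper (momentum, Adam, etc.) refine this same update rule, mainly by adapting the step size per parameter.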
 
*02/01 Recursive Neural Networks  
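The core idea of the 02/01 topic is computing a node's representation from its children, bottom-up over a tree. A minimal sketch, with one-dimensional "vectors" and hand-picked weights that are our own toy assumptions rather than anything from the course materials:

```python
import math

# Toy sketch of a recursive (tree-structured) neural network.
# Each internal node's representation is composed from its children:
#     parent = tanh(w_l * left + w_r * right + b)
# The scalar weights below are illustrative placeholders.

def compose(left, right, w_l=0.5, w_r=0.5, b=0.0):
    return math.tanh(w_l * left + w_r * right + b)

def encode(tree):
    """Leaves are numbers; internal nodes are (left, right) pairs."""
    if isinstance(tree, tuple):
        return compose(encode(tree[0]), encode(tree[1]))
    return tree

# Encode the binary tree ((0.2, 0.4), 0.6) bottom-up.
print(encode(((0.2, 0.4), 0.6)))
```

In practice the leaves are word vectors, the weights form a matrix applied to the concatenated children, and the same composition function is shared across all nodes of the parse tree.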

Revision as of 09:26, 29 January 2018