Fall 2017 CS595I Advanced NLP/ML Seminar

 
* Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning, Gu et al., NIPS 2017 https://arxiv.org/abs/1706.00387
 
* Sequence Level Training with Recurrent Neural Networks https://arxiv.org/pdf/1511.06732.pdf
 
===Learning (General)===
 
* Understanding Black-box Predictions via Influence Functions, Koh and Liang, ICML 2017 Best Paper. https://arxiv.org/pdf/1703.04730.pdf
 
===Generation===
 
Revision as of 13:40, 11 October 2017

Time: Tuesday 5-6pm. Location: HFH 1132.

If you are registered for this class, you should contact the instructor to lead the discussion of one of the papers below. If you do not lead a discussion, you will instead need to write a 3-page final report in NIPS 2017 style comparing any two of the papers below.

  • 09/26:
    • Mahnaz: Summer research presentation: Reinforced Pointer-Generator Network for Abstractive Summarization.
    • Xin: FeUdal Networks for Hierarchical Reinforcement Learning, Vezhnevets et al., ICML 2017 https://arxiv.org/pdf/1703.01161.pdf
  • 11/28:
  • 12/05: No meeting, NIPS conference.
  • 12/12: No meeting, NAACL deadline.

Word Embeddings

Relational Learning and Reasoning

Reinforcement Learning

Generation

Dialog

NLP for Computational Social Science