Winter 2018 CS595I Advanced NLP/ML Seminar

===Reinforcement Learning===
 
* Shallow Updates for Deep Reinforcement Learning, Levine et al., NIPS 2017 https://nips.cc/Conferences/2017/Schedule?showEvent=9098
 
* Imagination-Augmented Agents for Deep Reinforcement Learning Racanière et al., NIPS 2017 https://nips.cc/Conferences/2017/Schedule?showEvent=10081
 
 
* Robust Imitation of Diverse Behaviors, Wang et al. 2017, https://arxiv.org/pdf/1707.02747.pdf
 
 
* Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning, Gu et al., NIPS 2017 https://arxiv.org/abs/1706.00387
 
 
* Sequence Level Training with Recurrent Neural Networks https://arxiv.org/pdf/1511.06732.pdf
 
* Hybrid Reward Architecture for Reinforcement Learning, Van Seijen et al., NIPS 2017 https://nips.cc/Conferences/2017/Schedule?showEvent=9314
* Cold-Start Reinforcement Learning with Softmax Policy Gradient, Ding and Soricut, NIPS 2017 https://nips.cc/Conferences/2017/Schedule?showEvent=9067
  
 
===Generation===
 

Revision as of 00:27, 4 January 2018

Time: TBD. Location: HFH 1132.

If you are registered for this class, you should contact the instructor to present one paper *and* serve as the discussant for two of the papers below.

* Presenter: prepare a short summary presentation of no more than 15 minutes.
* Discussant: prepare two discussion questions about the paper.

If you do not present or lead a discussion, you will instead need to write a 2-page final report in ICML 2018 style, comparing any two of the papers below. Due: TBD; submit to william@cs.ucsb.edu.
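For the final report, a minimal LaTeX skeleton in the ICML style might look like the sketch below. It assumes the `icml2018` style file from the official ICML 2018 author kit is on your TeX path; the exact macro names and package options follow that kit and may differ slightly in the version you download, and the title, affiliation label, and email shown are placeholders.

```latex
\documentclass{article}
% Assumes icml2018.sty from the ICML 2018 author kit is available.
\usepackage{icml2018}

\begin{document}
\twocolumn[
\icmltitle{A Comparison of Two Seminar Papers}  % placeholder title
\begin{icmlauthorlist}
\icmlauthor{Your Name}{ucsb}
\end{icmlauthorlist}
\icmlaffiliation{ucsb}{University of California, Santa Barbara}
\icmlcorrespondingauthor{Your Name}{you@cs.ucsb.edu}  % placeholder email
\vskip 0.3in
]

\section{Introduction}
% Compare any two papers from the lists on this page, keeping
% the report to 2 pages in this style.

\end{document}
```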

Topics:

* Word Embeddings
* Relational Learning and Reasoning
* Reinforcement Learning
* Generation
* Dialog
* Learning
* NLP for Computational Social Science