Winter 2018 CS595I Advanced NLP/ML Seminar

Time: Monday 5-6pm, starting 01/22. Location: HFH 1132.

If you registered for this class, you should contact the instructor to present one paper *and* serve as the discussant for one paper below.

  • Presenter: prepare a short summary of the paper, to be presented in no more than 15 minutes.
  • Discussant: by presenting a paper in one session, you automatically become the discussant for the other paper in that session. Please prepare two discussion questions about that paper.

If you do not present or lead a discussion, you will instead need to write a 2-page final report in ICML 2018 style comparing any two of the papers below. Due: TBD; submit to william@cs.ucsb.edu.
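
Schedule

* 01/27
** [Mahnaz] Sequence Level Training with Recurrent Neural Networks, Ranzato et al., https://arxiv.org/abs/1511.06732
** [Zimu] Programmable Agents, Denil et al., https://arxiv.org/pdf/1706.06383v1.pdf

* 02/05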

Relational Learning and Reasoning

Reinforcement Learning
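
* Imagination-Augmented Agents for Deep Reinforcement Learning, Racanière et al., NIPS 2017, https://nips.cc/Conferences/2017/Schedule?showEvent=10081
* Robust Imitation of Diverse Behaviors, Wang et al., 2017, https://arxiv.org/pdf/1707.02747.pdf
* Compatible Reward Inverse Reinforcement Learning, Metelli et al., NIPS 2017, https://nips.cc/Conferences/2017/Schedule?showEvent=8993
* Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation, Wu et al., NIPS 2017, https://nips.cc/Conferences/2017/Schedule?showEvent=10087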

Generation

Dialog

Learning

NLP for Computational Social Science