Spring 2023 CS190I Introduction to Natural Language Processing

Instructor and Venue

  • Instructor: William Wang
  • TAs:
    • Alex Mei - alexmei@ucsb.edu
    • Michael Saxon - saxon@ucsb.edu
    • Jiachen Li - jiachen_li@ucsb.edu
  • Time: Tu Th 2:00-3:15pm
  • Location: ILP 1101
  • Discussions:
    • Starting the 2nd week: Fridays 2-3pm in Phelps 1445
    • Midterm Review, Friday April 28th 2-3pm in Henley 1010
    • Final Review, Tuesday June 6th 6-7pm in Henley 1010
  • Instructor Office Hours: Tuesdays 11am-12pm in Henley 2005, starting the second week (April 11th)
  • TA Office Hours:
    • Alex Mei: Mondays 4-5pm outside Henley 2113 (NLP Lab), starting the second week (April 10th)
    • Michael Saxon: Wednesdays 4-5pm outside Henley 2113 (NLP Lab), starting the second week (April 12th)
    • Jiachen Li: Thursdays 11am-12pm outside Henley 2113 (NLP Lab), starting the second week (April 13th)
  • Prerequisites:
    • Good programming skills and knowledge of data structures (e.g., CS 130A)
    • Basic understanding of automata and parsing (e.g., CS 138)
    • Advanced knowledge of machine learning (e.g., CS 165B), artificial intelligence (e.g., CS 165A), linear algebra, probability, and calculus

Course Objective

By the end of the quarter, students should have a good understanding of basic NLP tasks and should be able to implement fundamental algorithms for simple NLP problems. Students will also develop an understanding of open research problems in NLP.

Piazza

Please enroll if you haven't: https://piazza.com/ucsb/spring2023/cs190i

Syllabus

  • 04/04 Introduction & logistics, and NLP applications (Guest Lecturer: Sharon Levy)
  • 04/06 Basic text processing
  • 04/11 N-grams & language models & HW1 out
  • 04/13 Voted perceptron and logistic regression
  • 04/18 Part-of-speech tagging: HMMs
  • 04/20 HMMs and MEMMs
  • 04/25 Conditional Random Fields & HW1 due / HW2 out
  • 04/27 Distributional semantics (1): sparse representation
  • 05/02 In-class midterm exam
  • 05/04 Distributional semantics (2): dense representation
  • 05/09 Language and Vision (Guest Lecturer: Wanrong Zhu) & HW2 due / HW3 out
  • 05/11 Machine translation & Question Answering
  • 05/16 Deep learning for NLP: MLPs and RNNs
  • 05/18 Deep learning for NLP: LSTMs and CNNs
  • 05/23 Responsible NLP (Guest Lecturer: Alex Mei) & HW3 due / HW4 out
  • 05/25 Dialog Systems (Guest Lecturer: Alon Albalak)
  • 05/30 Deep learning for NLP: Transformers
  • 06/01 Deep learning for NLP: GPT Models
  • 06/06 Course review and class evaluation & HW4 due
  • 06/08 In-class final exam

Course Description

Have you ever used intelligent virtual assistants such as Google Now, Apple Siri, Amazon Alexa, Microsoft Cortana, or OpenAI ChatGPT? Are you wondering what technologies power such systems? How did IBM's Watson beat top human Jeopardy players? Or are you just curious about how Google Translate works? Understanding human language is an important goal for Artificial Intelligence, and this course introduces fundamental theories and practical applications in Natural Language Processing (NLP). In particular, this course focuses on the design of basic machine learning algorithms (e.g., classification and structured prediction) for core NLP problems. The concentration of this course is on the mathematical, statistical, and computational foundations of NLP.

Throughout the course, we will cover classic lexical, syntactic, and semantic processing topics in NLP, including language modeling, sentiment analysis, part-of-speech tagging, parsing, word sense disambiguation, distributional semantics, question answering, information extraction, and machine translation. The parallel theme on machine learning algorithms for NLP will focus on classic supervised, semi-supervised, and unsupervised learning models, including naive Bayes, logistic regression, hidden Markov models, maximum entropy Markov models, conditional random fields, feed-forward neural networks, recurrent neural networks, and convolutional neural networks. Along the way, we will study the implicit assumptions made in each of these machine learning models and understand the pros and cons of these modern statistical tools for solving NLP problems. A key emphasis of this course is on empirical and statistical analysis of large text corpora, and on distilling useful structured knowledge from large collections of unstructured documents.
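To give a concrete flavor of the material, below is a minimal sketch of a bigram language model with add-one (Laplace) smoothing, one of the first topics on the syllabus. The toy corpus, variable names, and smoothing choice are illustrative assumptions, not course-provided code:

  from collections import defaultdict

  # Toy corpus; <s> and </s> mark sentence boundaries.
  corpus = [
      "<s> the cat sat on the mat </s>",
      "<s> the dog sat on the log </s>",
  ]

  # Count unigram and bigram occurrences.
  unigram_counts = defaultdict(int)
  bigram_counts = defaultdict(int)
  for sentence in corpus:
      tokens = sentence.split()
      for token in tokens:
          unigram_counts[token] += 1
      for prev, curr in zip(tokens, tokens[1:]):
          bigram_counts[(prev, curr)] += 1

  vocab_size = len(unigram_counts)

  def bigram_prob(prev, curr):
      """P(curr | prev) with add-one (Laplace) smoothing."""
      return (bigram_counts[(prev, curr)] + 1) / (unigram_counts[prev] + vocab_size)

  print(bigram_prob("the", "cat"))  # seen bigram: relatively high probability
  print(bigram_prob("cat", "dog"))  # unseen bigram: small smoothed probability

On this toy corpus, a seen bigram such as ("the", "cat") receives a higher probability than an unseen one such as ("cat", "dog"), while smoothing keeps the unseen bigram's probability nonzero; this is exactly the estimation problem the language modeling lectures address.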

Text Book

No official textbook is required for this class, but the following optional textbook is recommended:

  • Speech and Language Processing (2nd ed.), Dan Jurafsky and James H. Martin.

A free draft version of a new edition of this book is available online [1].

Grading

There will be four homework assignments (40%), one midterm exam (20%), and a final exam (40%). Four late days are allowed with no penalty. After that, 50% will be deducted from submissions turned in within four days after the due date, unless you have a note from a doctor's office. Homework submissions that are five or more days late will receive zero credit. Your grades can be found on GauchoSpace.

Academic Integrity

We follow UCSB's academic integrity policy from UCSB Campus Regulations, Chapter VII ("Student Conduct and Discipline"):

  • It is expected that students attending the University of California understand and subscribe to the ideal of academic integrity, and are willing to bear individual responsibility for their work. Any work (written or otherwise) submitted to fulfill an academic requirement must represent a student’s original work. Any act of academic dishonesty, such as cheating or plagiarism, will subject a person to University disciplinary action. Using or attempting to use materials, information, study aids, or commercial “research” services not authorized by the instructor of the course constitutes cheating. Representing the words, ideas, or concepts of another person without appropriate attribution is plagiarism. Whenever another person’s written work is utilized, whether it be a single phrase or longer, quotation marks must be used and sources cited. Paraphrasing another’s work, i.e., borrowing the ideas or concepts and putting them into one’s “own” words, must also be acknowledged. Although a person’s state of mind and intention will be considered in determining the University response to an act of academic dishonesty, this in no way lessens the responsibility of the student.

More specifically, we follow Stefano Tessaro and William Cohen's policy in this class:

You cannot copy the code or answers to homework questions or exams from your classmates or from other sources. You may discuss course materials and assignments with your classmates, but you cannot write anything down during those discussions; you must write the answers and code independently. The presence or absence of any form of help or collaboration, whether given or received, must be explicitly stated and disclosed in full by all involved, on the first page of their assignment. Specifically, each assignment solution must start by answering the following questions:

  • (1) Did you receive any help whatsoever from anyone in solving this assignment? Yes / No.
    • If you answered 'yes', give full details (e.g., "Jane explained to me what is asked in Question 3.4").
  • (2) Did you give any help whatsoever to anyone in solving this assignment? Yes / No.
    • If you answered 'yes', give full details (e.g., "I pointed Joe to Section 2.3 to help him with Question 2").
  • No electronics are allowed during exams, but you may prepare an A4-sized note sheet and bring it to the exam.
  • If you have questions, please ask the teaching staff.

Academic dishonesty will be reported to the highest levels of command at UCSB. Students who engage in such activities will automatically receive an F grade.

Accessibility

Students with documented disabilities are asked to contact the DSP office to arrange the necessary academic accommodations.

Discussions

All discussions and questions should be posted on our course Piazza site.