Natural Language Processing (CSE 517): Predicate-Argument Semantics


Natural Language Processing (CSE 517): Predicate-Argument Semantics
Noah Smith
© 2016 University of Washington
nasmith@cs.washington.edu
February 29, 2016

Semantics vs. Syntax
Syntactic theories and representations focus on the question of which strings in V* are in the language. Semantics is about understanding what a string in V* means. Sidestepping a lengthy and philosophical discussion of what meaning is, we'll consider two meaning representations: predicate-argument structures, also known as event frames (today), and truth conditions represented in first-order logic (Wednesday).

Motivating Example: Who did What to Who(m)?
Warren bought the stock. They sold the stock to Warren. The stock was bought by Warren. The purchase of the stock by Warren surprised no one. Warren's stock purchase surprised no one.
In this buying/purchasing event/situation, Warren played the role of the buyer, and there was some stock that played the role of the thing purchased. Also, there was presumably a seller, mentioned in only one of the examples. Some of the examples also mention a separate surprising event, which (according to the sentences) did not occur.

Semantic Roles: Breaking
Jesse broke the window. The window broke. Jesse is always breaking things. The broken window testified to Jesse's malfeasance.
A breaking event has a Breaker and a Breakee.

Semantic Roles: Eating
Eat! (the Eater is you, the listener) We ate dinner. We already ate. The pies were eaten up quickly. Our gluttony was complete.
An eating event has an Eater and Food, neither of which needs to be mentioned explicitly.

Abstraction?
Breaker = Eater? Both are actors that have some causal responsibility for changes in the world around them.
Breakee = Food? Both are greatly affected by the event, which happened to them.

Thematic Roles (Jurafsky and Martin, 2015, with modifications)
Agent: The waiter spilled the soup.
Experiencer: John has a headache.
Force: The wind blows debris from the mall into our yards.
Theme: Jesse broke the window.
Result: The city built a regulation-size baseball diamond.
Content: Mona asked, "You met Mary Ann at a supermarket?"
Instrument: He poached catfish, stunning them with a shocking device.
Beneficiary: Ann Callahan makes hotel reservations for her boss.
Source: I flew in from Boston.
Goal: I drove to Portland.

Verb Alternation Examples: Breaking and Giving
Breaking:
Agent/subject; Theme/object; Instrument/PP-with
Instrument/subject; Theme/object
Theme/subject
Giving:
Agent/subject; Beneficiary/object; Theme/second-object
Agent/subject; Theme/object; Beneficiary/PP-to
Levin (1993) codified English verbs into classes that share patterns (e.g., verbs of throwing: throw/kick/pass).

Remarks
Fillmore (1968), among others, argued for semantic roles in linguistics.
By now, it should be clear that the expressiveness of natural language (at least English) makes semantic analysis rather distinct from syntax.
General challenges in analyzing semantic roles: What are the predicates/events/frames/situations? What are the roles/participants for each one? What algorithms can accurately identify and label all of them?

Semantic Role Labeling
Input: a sentence x.
Output: a collection of predicates, each consisting of a label (sometimes called the frame), a span, and a set of arguments; each argument consists of a label (usually called the role) and a span.
In principle, spans might have gaps, though in most conventions they do not.
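To make the output structure concrete, here is a minimal sketch, in Python, of one way such an analysis could be represented. The class names (Argument, Predicate) and the token-span convention are illustrative choices, not part of the lecture or any standard toolkit; the example instance encodes the buying event from "Warren bought the stock."

```python
# Illustrative representation of semantic role labeling output.
# The names Argument and Predicate are hypothetical, not from any library.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Argument:
    role: str              # role label (abstract or frame-specific)
    span: Tuple[int, int]  # token indices [start, end) in the sentence

@dataclass
class Predicate:
    frame: str             # predicate label ("frame")
    span: Tuple[int, int]  # tokens that evoke the predicate
    arguments: List[Argument]

# "Warren bought the stock ."  (tokens 0..4)
analysis = Predicate(
    frame="buy",
    span=(1, 2),  # "bought"
    arguments=[
        Argument(role="buyer", span=(0, 1)),            # "Warren"
        Argument(role="thing purchased", span=(2, 4)),  # "the stock"
    ],
)
print(analysis)
```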

The Importance of Lexicons
As with syntax, any annotated dataset is the product of extensive development of conventions. Many conventions are specific to particular words, and this information is codified in structured objects called lexicons. You should think of every semantically annotated dataset as comprising both the data and the lexicon. We consider two examples.

PropBank (Palmer et al., 2005)
Frames are verb senses (later extended, though).
The lexicon maps verb-sense-specific roles onto a small set of abstract roles (e.g., Arg0, Arg1, etc.).
Annotated on top of the Penn Treebank, so that arguments are always constituents.

fall.01 (move downward)
arg1: logical subject, patient, thing falling
arg2: extent, amount fallen
arg3: starting point
arg4: ending point
argm-loc: medium
Examples: Sales fell to $251.2 million from $278.8 million. The average junk bond fell by 4.2%. The meteor fell through the atmosphere, crashing into Palo Alto.
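As a rough sketch of how a roleset like this and one annotated sentence might look as data: the dictionary layout below is assumed (it is not PropBank's actual file format), only the role descriptions are copied from fall.01 above, and the argument spans for the first example sentence are an illustrative annotation.

```python
# Sketch of a PropBank-style roleset entry and one annotated sentence.
# The data layout is hypothetical; the role descriptions come from fall.01.
FALL_01 = {
    "roleset": "fall.01",
    "gloss": "move downward",
    "roles": {
        "arg1": "logical subject, patient, thing falling",
        "arg2": "extent, amount fallen",
        "arg3": "starting point",
        "arg4": "ending point",
        "argm-loc": "medium",
    },
}

# "Sales fell to $251.2 million from $278.8 million ."
# Illustrative argument spans, given as (role, token_start, token_end):
annotation = {
    "predicate": ("fall.01", 1, 2),   # "fell"
    "arguments": [
        ("arg1", 0, 1),               # "Sales"
        ("arg4", 2, 5),               # "to $251.2 million" (ending point)
        ("arg3", 5, 8),               # "from $278.8 million" (starting point)
    ],
}
```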

fall.08 (fall back, rely on in emergency)
arg0: thing falling back
arg1: thing fallen back on
Example: World Bank president Paul Wolfowitz has fallen back on his last resort.

fall.10 (fall for a trick; be fooled by)
arg1: the fool
arg2: the trick
Example: Many people keep falling for the idea that lowering taxes on the rich benefits everyone.

FrameNet (Baker et al., 1998)
Frames can be evoked by any content word (verb, noun, adjective, adverb).
About 1,000 frames, each with its own roles.
Both frames and roles are hierarchically organized.
Annotated without syntax, so that arguments can be anything.
https://framenet.icsi.berkeley.edu

change position on a scale
Item: entity that has a position on the scale
Attribute: scalar property that the Item possesses
Difference: distance by which an Item changes its position
Final state: Item's state after the change
Final value: position on the scale where the Item ends up
Initial state: Item's state before the change
Initial value: position on the scale from which the Item moves
Value range: portion of the scale along which values of the Attribute fluctuate
Duration: length of time over which the change occurs
Speed: rate of change of the value
Group: the group in which an Item changes the value of an Attribute

FrameNet Example
In "Attacks on civilians decreased over the last four months," the verb decreased evokes the change position on a scale frame: "Attacks on civilians" fills the Item role and "over the last four months" fills the Duration role. The Attribute is left unfilled but is understood from context (i.e., frequency).

change position on a scale
Verbs: advance, climb, decline, decrease, diminish, dip, double, drop, dwindle, edge, explode, fall, fluctuate, gain, grow, increase, jump, move, mushroom, plummet, reach, rise, rocket, shift, skyrocket, slide, soar, swell, swing, triple, tumble
Nouns: decline, decrease, escalation, explosion, fall, fluctuation, gain, growth, hike, increase, rise, shift, tumble
Adverb: increasingly

change position on a scale: frame hierarchy (partial, as diagrammed on the slide). Birth scenario, change position on a scale, and waking up all inherit from event; change of temperature and proliferating in number inherit from change position on a scale. (Birth scenario also inherits from sexual reproduction scenario.)

Semantic Role Labeling Tasks
The paper that started it all: Gildea and Jurafsky (2002) used the FrameNet lexicon (which includes prototypes, not really a corpus). When FrameNet started releasing corpora, the task was reformulated. Example open-source system: SEMAFOR (Das et al., 2014).
The PropBank corpus is used directly for training/testing. The Conference on Computational Natural Language Learning (CoNLL) ran shared tasks in 2004, 2005, 2008, and 2009, all PropBank-based. In 2008 and 2009, the task was cast as a kind of dependency parsing. In 2009, seven languages were included in the task.

Methods
Semantic role labeling boils down to labeling spans (with frames and roles). It's mostly about features.

Example: Path Features
The example sentence, as a Penn Treebank constituency tree:
(S (NP-SBJ (DT The) (NNP San) (NNP Francisco) (NNP Examiner))
   (VP (VBD issued)
       (NP (DT a) (JJ special) (NN edition))
       (PP-TMP (IN around) (NN noon))
       (NP-TMP (NN yesterday))))
Path from the NP-SBJ "The San Francisco Examiner" to "issued": NP↑S↓VP↓VBD.
Path from the NP "a special edition" to "issued": NP↑VP↓VBD.
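Below is a short sketch of how such a path feature could be computed over a constituency tree with NLTK. The helper name path_feature and the arrow-joined output format are illustrative choices (not from the lecture); the tree is the slide's example.

```python
# Sketch of a Gildea-and-Jurafsky-style "path" feature over an NLTK
# constituency tree.  path_feature and the arrow notation are illustrative.
from nltk.tree import Tree

TREE = Tree.fromstring(
    "(S (NP-SBJ (DT The) (NNP San) (NNP Francisco) (NNP Examiner)) "
    "(VP (VBD issued) (NP (DT a) (JJ special) (NN edition)) "
    "(PP-TMP (IN around) (NN noon)) (NP-TMP (NN yesterday))))"
)

def path_feature(tree, arg_pos, pred_pos):
    """Category path from the argument node up to the lowest common
    ancestor, then down to the predicate node."""
    # NLTK tree positions are tuples of child indices; the lowest common
    # ancestor corresponds to the longest common prefix of the two positions.
    k = 0
    while k < min(len(arg_pos), len(pred_pos)) and arg_pos[k] == pred_pos[k]:
        k += 1
    up = [tree[arg_pos[:i]].label() for i in range(len(arg_pos), k - 1, -1)]
    down = [tree[pred_pos[:i]].label() for i in range(k + 1, len(pred_pos) + 1)]
    return "↑".join(up) + "↓" + "↓".join(down)

# (0,) is the subject NP-SBJ; (1, 0) is the VBD predicate "issued".
print(path_feature(TREE, (0,), (1, 0)))    # NP-SBJ↑S↓VP↓VBD (function tags often stripped)
print(path_feature(TREE, (1, 1), (1, 0)))  # NP↑VP↓VBD ("a special edition" to "issued")
```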

Methods: Beyond Features
The span-labeling decisions interact a lot:
the presence of a frame increases the expectation of certain roles;
roles for the same predicate shouldn't overlap (a toy illustration of this constraint appears below);
some roles are mutually exclusive or require each other (e.g., "resemble").
Ensuring well-formed outputs:
Using syntax as a scaffold allows efficient prediction; you're essentially labeling the parse tree (Toutanova et al., 2008).
Others have formulated the problem as constrained, discrete optimization (Punyakanok et al., 2008).
There are also greedy methods (Björkelund et al., 2010) and joint methods for syntactic and semantic dependencies (Henderson et al., 2013).
Current work: some recent attempts to merge FrameNet and PropBank have shown promise (FitzGerald et al., 2015; Kshirsagar et al., 2015).
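As a toy illustration of the "roles for one predicate should not overlap" constraint, the sketch below greedily selects the highest-scoring candidate role spans while skipping any span that overlaps one already selected. This is not any of the cited systems' algorithms; published approaches typically enforce such constraints with dynamic programming or integer linear programming.

```python
# Toy constraint-aware decoding: pick candidate (role, span, score) triples
# for one predicate so that no two selected spans overlap.
from typing import List, Tuple

Candidate = Tuple[str, Tuple[int, int], float]  # (role, (start, end), score)

def overlaps(a: Tuple[int, int], b: Tuple[int, int]) -> bool:
    return a[0] < b[1] and b[0] < a[1]  # half-open spans [start, end)

def greedy_decode(candidates: List[Candidate]) -> List[Candidate]:
    selected: List[Candidate] = []
    for cand in sorted(candidates, key=lambda c: c[2], reverse=True):
        if all(not overlaps(cand[1], s[1]) for s in selected):
            selected.append(cand)
    return selected

# Candidates for "Warren bought the stock"; one overlaps another.
cands = [
    ("buyer", (0, 1), 0.9),            # "Warren"
    ("thing purchased", (2, 4), 0.8),  # "the stock"
    ("thing purchased", (1, 4), 0.3),  # "bought the stock" (overlaps "the stock")
]
print(greedy_decode(cands))  # keeps the two non-overlapping, higher-scoring spans
```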

Related Problems in Relational Semantics
Coreference resolution: which mentions (within or across texts) refer to the same entity or event?
Entity linking: ground such mentions in a structured knowledge base (e.g., Wikipedia).
Relation extraction: characterize the relation among specific mentions.
Information extraction: transform text into a structured knowledge representation. Classical IE starts with a predefined schema; open IE includes the automatic construction of the schema; see http://ai.cs.washington.edu/projects/open-information-extraction

General Remarks
Criticisms of semantic role labeling:
Semantic roles are just "syntax++," since they don't allow much in the way of reasoning (e.g., question answering).
Lexicon building is slow and requires expensive expertise. Can we do this for every (sub)language?
We've now had a taste of two branches of semantics: lexical semantics (e.g., supersense tagging) and relational semantics (e.g., semantic role labeling). Next up, a third: compositional semantics.

If time... (Acknowledgment: Nathan Schneider.)
dragonfly conveyor belt finger food anteater brain teaser C++ code leather belt birthday Batman firehose fish food steel wool jazz musician staple remover fisheye Cookie Monster Spanish teacher computer science student teacher U.S. Constitution Facebook status coffee cake iron fist Toy Story glue gun baby food Labor Day thesis supervisor flyswatter dawn raid paper clip surge protector project team spaghetti monster tomato sauce string orchestra rubber duck piano key toothbrush heartburn Shannon entropy elevator button
Your job is to group these into categories and explain those categories to the class; focus on the semantic relationship between the two nouns in each compound. You may wish to think of other compounds to help make your case.

Readings and Reminders
Jurafsky and Martin (2015).
Assignment 4 is due Wednesday.
Submit a suggestion for an exam question by Friday at 5pm.
Your project is due March 9.

References
Collin F. Baker, Charles J. Fillmore, and John B. Lowe. The Berkeley FrameNet project. In Proc. of ACL-COLING, 1998.
Anders Björkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. A high-performance syntactic and semantic dependency parser. In Proc. of COLING, 2010.
Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. Frame-semantic parsing. Computational Linguistics, 40(1):9–56, 2014.
Charles J. Fillmore. The case for case. In Bach and Harms, editors, Universals in Linguistic Theory. Holt, Rinehart, and Winston, 1968.
Nicholas FitzGerald, Oscar Täckström, Kuzman Ganchev, and Dipanjan Das. Semantic role labeling with neural network factors. In Proc. of EMNLP, 2015.
Daniel Gildea and Daniel Jurafsky. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288, 2002.
James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics, 39(4):949–998, 2013.
Daniel Jurafsky and James H. Martin. Semantic role labeling (draft chapter), 2015. URL https://web.stanford.edu/~jurafsky/slp3/22.pdf.
Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer. Frame-semantic role labeling with heterogeneous annotations. In Proc. of ACL, 2015.
Beth Levin. English verb classes and alternations: A preliminary investigation. University of Chicago Press, 1993.
Martha Palmer, Daniel Gildea, and Paul Kingsbury. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–105, 2005.
Vasin Punyakanok, Dan Roth, and Wen-tau Yih. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287, 2008.
Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. A global joint model for semantic role labeling. Computational Linguistics, 34(2):161–191, 2008.