This article concerns the SemLM models of Peng and Roth (2016)[1].

Formalization

The language models capture sequences of semantic frames, arguments, and discourse markers. There are two formalizations (a minimal sketch of both follows the list):

  • Frame-chain (FC) SemLM: [f1, dis1, f2, o, f3, o, dis2, ..., o]
  • Entity-centered (EC) SemLM: [fa1, dis1, fa2, fa3, dis2, ...]
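To make the notation concrete, here is a sketch of what the two token sequences might look like for a short document. The token strings are illustrative, not taken from the paper's data:

```python
# Illustrative sketch of the two formalizations (token strings are made up).
# In the FC chain, "o" marks the end of a sentence; discourse-marker slots
# hold explicit connectives such as "but" or "because".
fc_chain = ["leave.01", "but", "return.01", "o", "decide.01-stay.01", "o"]

# The EC chain keeps only frames in which one tracked entity fills a role,
# written as frame#role tokens, with no sentence-boundary symbol.
ec_chain = ["leave.01#A0", "but", "return.01#A0", "decide.01-stay.01#A0"]
```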

Frames and arguments

FrameNet mapping: The authors use both FrameNet and PropBank frames. They explain the use of FrameNet as "achieving a higher level of abstraction." PropBank frames are used when a mapping to FrameNet is not available.
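A minimal sketch of this fallback, assuming some PropBank-to-FrameNet lookup table is available; `PB_TO_FN` and its sample entries are hypothetical stand-ins for the real mapping resource:

```python
# Hypothetical PropBank-to-FrameNet lookup; the real mapping is a full
# resource, not two hard-coded entries.
PB_TO_FN = {"buy.01": "Commerce_buy", "purchase.01": "Commerce_buy"}

def abstract_frame(propbank_frame: str) -> str:
    """Use the FrameNet frame when a mapping exists, else keep PropBank."""
    return PB_TO_FN.get(propbank_frame, propbank_frame)

abstract_frame("buy.01")     # -> "Commerce_buy" (mapped, more abstract)
abstract_frame("gobble.01")  # -> "gobble.01" (no mapping; PropBank frame kept)
```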

The authors did not study the impact of the FrameNet mapping on coreference resolution, but they did for perplexity and narrative cloze: removing the mapping increases perplexity (Table 3) and decreases MRR and Recall@30 (Table 4).

Augmenting: This is important for capturing the true meaning of a predicate. "1) if a preposition immediately follows a predicate, we append the preposition to the predicate e.g. “take over”; 2) if we encounter the semantic role label AM-PRD which indicates a secondary predicate, we also append this secondary predicate to the main predicate e.g. “be happy”; 3) if we see the semantic role label AM-NEG which indicates negation, we append “not” to the predicate e.g. “not like”. These three augmentations can co-exist and they allow us to model more fine-grained semantic frames."
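A sketch of the three rules as one function; the flat inputs (pre-extracted flags instead of full SRL output) are a simplification of ours, not the authors' interface:

```python
def augment_predicate(predicate, following_preposition=None,
                      secondary_predicate=None, negated=False):
    """Apply the three augmentation rules quoted above.

    `following_preposition` is a preposition immediately after the predicate,
    `secondary_predicate` comes from an AM-PRD label, and `negated` from an
    AM-NEG label.
    """
    if following_preposition:
        predicate = f"{predicate} {following_preposition}"  # rule 1: "take over"
    if secondary_predicate:
        predicate = f"{predicate} {secondary_predicate}"    # rule 2: "be happy"
    if negated:
        predicate = f"not {predicate}"                      # rule 3: "not like"
    return predicate

augment_predicate("take", following_preposition="over")  # -> "take over"
augment_predicate("like", negated=True)                  # -> "not like"
```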

Compound frames (verb compounds): IMHO, this is an important invention of Peng and Roth. For example, in the simple sentence "He wants to see me", a naive analysis would generate two frames: want (A0=he, A1=to see me) and see (A0=he, A1=me). This analysis may be correct in SRL theory, but it is not very informative: he has not seen me yet, and pretending so would create wrong features; "want" carries very little meaning on its own; and "to see me" is a complex expression that has so far resisted NLP analysis.

The authors state: "We apply the rule that if the gap between two predicates is less than two tokens, we treat them as a unified semantic frame defined by the conjunction of the two (augmented) semantic frames, e.g. “eat.01-drink.01” and “decide.01-buy.01”."
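A sketch of this rule, under our reading that a "gap of less than two tokens" means fewer than two tokens between the two predicate positions (the paper does not spell out the exact counting):

```python
def compound_frames(frames):
    """Merge frames whose predicates have fewer than two tokens between them.

    `frames` is a list of (token_position, frame_label) pairs in sentence
    order, e.g. SRL output after the augmentation step above.
    """
    merged = []
    for pos, label in frames:
        if merged and pos - merged[-1][0] - 1 < 2:  # 0 or 1 intervening tokens
            prev_pos, prev_label = merged.pop()
            merged.append((pos, f"{prev_label}-{label}"))
        else:
            merged.append((pos, label))
    return [label for _, label in merged]

# "He wants to see me": want.01 at token 1, see.01 at token 3 (gap of one).
compound_frames([(1, "want.01"), (3, "see.01")])  # -> ["want.01-see.01"]
```

Note that this sketch merges greedily, so three predicates in close succession would collapse into one triple label; whether the authors allow that is not stated.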

The number of compounds is substantial: about half the number of single frames for FC and a third for EC (Table 2). Why so? How do they determine the arguments of compound frames? The authors do not include the linking prepositions and conjunctions in the representation of compound frames. Are these important? How many unique compounds would they add? The authors also did not report the effect of compounds on performance. What is it?

It looks like arguments are concatenated to frames: "we denote fa = f#Arg when referring to an argument role label (Arg) inside a frame (f)." The fa tokens are used for EC only (in fact, they are well-defined for EC only). Interestingly, the number of single frames in EC is smaller than in FC. Is it because so many frames are not attached to an interesting entity?
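The fa notation then amounts to string concatenation of a frame with the role the tracked entity fills in it (a trivial helper of ours, not the authors' code):

```python
def frame_argument(frame, role):
    """fa = f#Arg: pair a frame with the role a tracked entity fills in it."""
    return f"{frame}#{role}"

frame_argument("return.01", "A0")  # -> "return.01#A0"
```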

Discourse markers

An ablation study was carried out in the context of coreference resolution. Removing discourse markers reduces performance by 0.15–0.46% (Table 5).

Frame-chain SemLM

Entity-centered SemLM

Training

Log-bilinear model (LB)

See Log-bilinear models
Everything the authors say: "We use the OxLM toolkit (Paul et al., 2014) with Noise-Contrastive Estimation (Gutmann and Hyvärinen, 2010) for the LB model. We set the context window size to 5 and produce 150-dimension embeddings."

We do not know whether they use a frequency cut-off.
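For intuition, here is a NumPy sketch of a log-bilinear scoring step with the paper's dimensions (window 5, 150-dimensional embeddings). This is not the OxLM API: it uses per-position weight vectors (a diagonal approximation of the context transform matrices) and a full softmax, whereas OxLM trains with noise-contrastive estimation instead of normalizing over the whole vocabulary. All variable names are ours.

```python
import numpy as np

V, D, N = 10_000, 150, 5                # vocab size, embedding dim, window
rng = np.random.default_rng(0)
R = rng.normal(scale=0.1, size=(V, D))  # context-word embeddings
Q = rng.normal(scale=0.1, size=(V, D))  # target-word embeddings
C = rng.normal(scale=0.1, size=(N, D))  # per-position weights (diagonal C_i)
b = np.zeros(V)                         # per-word bias

def next_token_logprobs(context_ids):
    """Predict a vector from the N context tokens, score every word against it."""
    pred = sum(C[i] * R[w] for i, w in enumerate(context_ids))
    scores = Q @ pred + b                         # one score per vocabulary word
    return scores - np.logaddexp.reduce(scores)   # log-softmax normalization

logp = next_token_logprobs([17, 4, 923, 8, 2])   # log P(next token | context)
```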

References

  1. Peng, H., & Roth, D. (2016). Two Discourse Driven Language Models for Semantics. ACL 2016, 290–300.