Abstract: Detecting aspectual properties of clauses in the form of semantic clause types has been shown to depend on a combination of syntactic-semantic and contextual features. We explore this task in a deep-learning framework, where tuned word representations capture linguistic features. We introduce an attention mechanism that pinpoints relevant context information. Our model implicitly captures task-relevant features and avoids the need to reproduce explicit linguistic features for other languages. We present experiments for English and German that achieve competitive performance and analyze the outputs of our systems from a linguistic point of view. Finally, we present a novel take on modeling and exploiting genre information and showcase the adaptation of our system from one language to another.
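To make the attention idea concrete, the following is a minimal sketch (not the authors' implementation) of how a clause representation could attend over surrounding context representations before classification into semantic clause types; the module name, dimensions, and PyTorch setup are illustrative assumptions.

```python
# Minimal sketch of attention over context, assuming a target clause vector
# attends over vectors for surrounding sentences; all names/dims are hypothetical.
import torch
import torch.nn as nn


class ContextAttention(nn.Module):
    """Scores each context vector against the clause vector and returns a
    weighted summary of the context (plus the attention weights)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, clause: torch.Tensor, context: torch.Tensor):
        # clause:  (batch, dim)        representation of the target clause
        # context: (batch, n_ctx, dim) representations of context sentences
        scores = torch.bmm(context, self.proj(clause).unsqueeze(-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)              # (batch, n_ctx)
        summary = torch.bmm(weights.unsqueeze(1), context).squeeze(1)
        return summary, weights                              # summary: (batch, dim)


# Usage sketch: the attended context summary would be combined with the clause
# vector and fed to a clause-type classifier (dimensions are illustrative).
model = ContextAttention(dim=128)
clause_vec = torch.randn(4, 128)
context_vecs = torch.randn(4, 10, 128)
ctx_summary, attn = model(clause_vec, context_vecs)
```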