
Discriminative Latent Variable Based Classifier for Translation Error Detection

Presenter: Jinhua Du (杜金华)

Xi’an University of Technology 西安理工大学

NLP&CC, Chongqing, Nov. 17-19, 2013


Outline

1. Introduction

2. DPLVM for Translation Error Detection

3. Experiments and Analysis

4. Conclusions and Future Work


1. Introduction – Problem

1. In the localization industry, humans are always involved in post-editing MT results;

2. MT errors increase the human effort required to obtain a reasonable translation;

3. Translation error detection, or word-level confidence estimation, can improve the working efficiency of post-editors to some extent.


• Research Question: how to improve the accuracy of detecting translation errors?


1. Introduction – Related Work

2004: Blatz et al. combined a neural network and a naive Bayes classifier

2003/2007: Ueffing and Ney exhaustively explored various kinds of WPP features

2009/2011: Specia et al. worked on confidence estimation in the CAT field

2010: Xiong et al. used a MaxEnt-based classifier to predict translation errors


1. Introduction – Key Factors

Classifiers: for the same feature set, different classifiers show different performance, so how to select or design a proper classifier is important.

Features: for a given classifier, different features reflect different characteristics of the problem, so how to select or design the feature set is crucial.


1. Introduction – Our Work

• Feature set

• Comparison with SVM and MaxEnt

• Discriminative latent variable classifier


2. DPLVM Algorithm

Conditions: a sequence of observations x = {x1, x2,…, xm}

a sequence of labels y = {y1, y2,…, ym}

Assumption: a sequence of latent variables h = {h1, h2,…, hm}

Goal: to learn a mapping between x and y

Definition:

P(\mathbf{y} \mid \mathbf{x}, \Theta) = \sum_{\mathbf{h}} P(\mathbf{y} \mid \mathbf{h}, \mathbf{x}, \Theta) \, P(\mathbf{h} \mid \mathbf{x}, \Theta)    (1)


Simplified Algorithm

Assumptions: the model is restricted to have disjoint sets of latent variables associated with each class label; each h_j is a member of a set H_{y_j} of possible latent variables for the class label y_j; sequences which have any h_j \notin H_{y_j} will by definition have P(\mathbf{y} \mid \mathbf{x}, \Theta) = 0.

Equation (1) can be re-written as:

P(\mathbf{y} \mid \mathbf{x}, \Theta) = \sum_{\mathbf{h} \in H_{y_1} \times \cdots \times H_{y_m}} P(\mathbf{h} \mid \mathbf{x}, \Theta)    (2)

where

P(\mathbf{h} \mid \mathbf{x}, \Theta) = \frac{\exp\big(\Theta \cdot \mathbf{f}(\mathbf{h}, \mathbf{x})\big)}{\sum_{\forall \mathbf{h}'} \exp\big(\Theta \cdot \mathbf{f}(\mathbf{h}', \mathbf{x})\big)}    (3)
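The sketch below illustrates Eqs. (2) and (3), again by brute-force enumeration; theta, feature_fn (standing in for f(h, x)) and latent_sets are hypothetical names. It only shows how the disjoint latent sets restrict the sum; the actual model evaluates these quantities with dynamic programming.

```python
import math
from itertools import product

def p_h_given_x(h, x, theta, feature_fn, latent_values):
    """Eq. (3): P(h|x,Theta) = exp(Theta . f(h,x)) / sum_h' exp(Theta . f(h',x))."""
    def score(hseq):
        # Dot product of the parameter vector with the global feature vector.
        return sum(t * f for t, f in zip(theta, feature_fn(hseq, x)))
    z = sum(math.exp(score(hp)) for hp in product(latent_values, repeat=len(x)))
    return math.exp(score(h)) / z

def p_y_given_x_disjoint(y, x, theta, feature_fn, latent_sets, latent_values):
    """Eq. (2): sum of P(h|x,Theta) over h in H_{y_1} x ... x H_{y_m}."""
    # latent_sets[label] is the disjoint set H_label of latent values allowed
    # for that label; sequences using any other latent value never enter the
    # sum, which realizes the P(y|x,Theta) = 0 restriction stated above.
    allowed = [latent_sets[label] for label in y]
    return sum(p_h_given_x(h, x, theta, feature_fn, latent_values)
               for h in product(*allowed))
```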


Parameter Estimation

Decoding for the test set:

Decoding algorithm: Sun and Tsujii (2009), a latent-dynamic inference (LDI) method based on A* search and dynamic programming.
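The slide names only the decoding algorithm; as an assumption rather than the slide's own equations, the standard DPLVM formulation of Sun and Tsujii (2009) trains by maximizing an L2-regularized conditional log-likelihood over the n training pairs and decodes the most probable label sequence:

```latex
% Assumed standard DPLVM objective and decoding rule (following Sun and
% Tsujii, 2009); stated here as an assumption, not taken from the slide.
L(\Theta) = \sum_{j=1}^{n} \log P(\mathbf{y}_j \mid \mathbf{x}_j, \Theta)
          - \frac{\lVert \Theta \rVert^{2}}{2\sigma^{2}},
\qquad
\mathbf{y}^{*} = \arg\max_{\mathbf{y}} P(\mathbf{y} \mid \mathbf{x}, \Theta^{*})
```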


DPLVM in the Translation Error Detection Task

Prerequisites: types of errors can be classified; each class has a specific label; the classification task can then be regarded as a labelling task.

Two classes of word labels: C (correct), good words are labelled c; I (incorrect), bad words are labelled i. A toy example of this framing follows below.
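A toy illustration of the labelling framing with a made-up hypothesis; the words and labels below are purely illustrative, not taken from the paper's data.

```python
# Hypothetical MT hypothesis and its word-level labels (illustrative only):
# c = correct ("good" word), i = incorrect ("bad" word).
hypothesis = ["the", "committee", "discuss", "the", "report", "yesterday"]
labels     = ["c",   "c",         "i",       "c",   "c",      "i"]

# Error detection is then a sequence-labelling problem: predict one label
# from {c, i} for every word of the MT hypothesis.
assert len(hypothesis) == len(labels)
```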


Feature Set

Word posterior probabilities (WPP):
• Fixed position based WPP
• Flexible position based WPP
• Word alignment based WPP

Lexical features:
• Part of speech (POS)
• Word entity

Syntactic features:
• Word links from the LG parser

A sketch of how these per-word features can be assembled follows below.
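As an illustration only, the sketch below gathers these features for a single hypothesis word. All argument names and the dictionary encoding are hypothetical, since the slides do not specify how the features are fed to the classifier.

```python
def word_features(i, words, wpp_fixed, wpp_flexible, wpp_align,
                  pos_tags, word_entities, lg_links):
    """Gather the per-word features listed above for the i-th hypothesis word.

    Every argument is a hypothetical precomputed sequence aligned with the
    hypothesis words: three WPP scores (fixed-position, flexible-position and
    word-alignment based), POS tags, word-entity tags, and the number of
    links the word receives from the LG parser.
    """
    return {
        "word":         words[i],
        "wpp_fixed":    wpp_fixed[i],
        "wpp_flexible": wpp_flexible[i],
        "wpp_align":    wpp_align[i],
        "pos":          pos_tags[i],
        "entity":       word_entities[i],
        "lg_links":     lg_links[i],
    }
```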


Feature Representation


3. Experiments and Analysis

Experimental Settings – SMT system

• Language pair: Chinese-English
• Training set: NIST data set, 3.4M
• Devset: NIST MT 2006 current set
• Testset: NIST MT 2005 and 2008 sets

SMT Performance


Experimental Settings for Error Detection Task

Data Set and Data Annotation

• Devset: translations of NIST MT-08
• Testset: translations of NIST MT-05
• Annotation: TER is used to determine the true labels for words; the ratio of correct words (RCW) is 37.99% for MT-08 and 41.59% for MT-05 (see the label-derivation sketch below)

Evaluation Metrics
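The slide states only that TER determines the true labels; the sketch below shows one plausible mapping from a TER-style edit-operation sequence to c/i labels. The operation codes and the function name are assumptions.

```python
def labels_from_ter_alignment(edit_ops):
    """Derive word labels from a TER-style hypothesis/reference alignment.

    edit_ops is assumed to hold one edit operation per aligned position:
    'M' (match) marks a correct hypothesis word ('c'); substitutions ('S')
    and insertions ('I') mark incorrect ones ('i'); deletions ('D') have no
    corresponding hypothesis word, so they produce no label.
    """
    return ["c" if op == "M" else "i" for op in edit_ops if op != "D"]

# Illustrative usage on a made-up alignment:
print(labels_from_ter_alignment(["M", "S", "M", "I", "D", "M"]))  # ['c', 'i', 'c', 'i', 'c']
```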


Comparison

(1) Classification Experiments Based on Individual Features


(2) Classification Experiment on Combined Features


Observations

• Named entities are prone to being wrongly classified

• Prepositions, conjunctions, auxiliary verbs and articles are more likely to be wrongly classified

• The proportion of notional words that are wrongly classified is relatively small


4. Conclusions and Future Work

Conclusions

Presents a new classifier, a DPLVM-based classifier, for translation error detection

Introduces three different kinds of WPP features and three linguistic features

Compares the MaxEnt classifier, the SVM classifier and our DPLVM classifier

The proposed classifier performs best compared with the two other individual classifiers in terms of CER


4. Conclusions and Future Work

Future Work

Introducing paraphrases to annotate the hypotheses

Introducing new useful features to further improve the detection capability

Performing experiments on more language pairs to verify our proposed method.


Thanks for your attention!