Mining Transactional Data
Ted Dunning - 2004
Outline
● What are LLR tests?
– What value have they shown?
● What are transactional values?
– How can we define LLR tests for them?
● How can these methods be applied?
– Modeling architecture examples
● How new is this?
Log-likelihood Ratio Tests
● A theorem due to Chernoff showed that the generalized log-likelihood ratio is asymptotically χ² distributed in many useful cases
● Most well-known statistical tests are either approximately or exactly LLR tests
– Includes z-test, F-test, t-test, Pearson's χ²
● Pearson's χ² is an approximation valid for large expected counts ... G² is the exact form for multinomial contingency tables
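The two statistics can be computed side by side on a small contingency table. This sketch is not from the original slides; the function name and the counts are illustrative:

```python
import math

def pearson_and_g2(table):
    """Compare Pearson's chi^2 and the G^2 (LLR) statistic on a two-way
    contingency table.  Both test independence of rows and columns; G^2 is
    the exact log-likelihood-ratio form for multinomial tables."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = g2 = 0.0
    for i, r in enumerate(table):
        for j, k in enumerate(r):
            e = rows[i] * cols[j] / n       # expected count under independence
            chi2 += (k - e) ** 2 / e
            if k > 0:                       # 0 * log 0 = 0 by convention
                g2 += 2 * k * math.log(k / e)
    return chi2, g2

# Hypothetical word-vs-corpus counts: occurrences of a word in two corpora
# against all other words.  With small expected counts the two statistics
# diverge noticeably.
chi2, g2 = pearson_and_g2([[110, 2442], [2, 27447]])
```

On an exactly independent table both statistics are zero; they differ most when some expected counts are small, which is the regime where G² remains valid and Pearson's χ² does not.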
Mathematical Definition
● Ratio of the maximum likelihood under the null hypothesis to the unrestricted maximum likelihood

$$\lambda = \frac{\max_{\theta \in \Omega_0} l(X \mid \theta)}{\max_{\theta \in \Omega} l(X \mid \theta)}, \qquad \text{d.o.f.} = \dim \Omega - \dim \Omega_0$$

● −2 log λ is asymptotically χ² distributed
Comparison of Two Observations
● Two independent observations, X₁ and X₂, can be compared to determine whether they are from the same distribution

$$\lambda = \frac{\max_{\theta \in \Omega} l(X_1 \mid \theta)\, l(X_2 \mid \theta)}{\max_{\theta_1, \theta_2 \in \Omega} l(X_1 \mid \theta_1)\, l(X_2 \mid \theta_2)}, \qquad \text{d.o.f.} = \dim \Omega, \quad (\theta_1, \theta_2) \in \Omega \times \Omega$$
History of LLR Tests for “Text”
● Statistics of Surprise and Coincidence
● Genomic QA tools
● Luduan
● HNC text-mining, preference mining
● MusicMatch recommendation engine
How Useful is LLR?
● A test in 1997 showed that a query construction system using LLR (Luduan) decreased the error rate of the best document routing system (Inquery) by approximately 5x at 10% recall and nearly 2x at 20% recall
● Language and species ID programs showed similar improvements versus state of the art
● Previously unsuspected structure around intron splice sites was discovered using LLR tests
TREC Document Routing Results
[Figure: "Luduan vs Inquery" — precision vs. recall curves comparing Inquery, Luduan, and Convectis]
What are Transactional Variables?
● A transactional sequence is a sequence of transactions.
● Transactions are instances of a symbol and (optionally) a time and an amount:
$$Z = z_1 \ldots z_N, \quad z_i = \langle \sigma_i, t_i, x_i \rangle, \quad \sigma_i \in \Sigma \text{ (an alphabet of symbols)}, \quad t_i, x_i \in \mathbb{R}$$
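This definition translates directly into code. A minimal Python rendering (the `Transaction` type and its field names are mine, not from the slides) with optional time and amount:

```python
from typing import NamedTuple, Optional

class Transaction(NamedTuple):
    """One transaction z_i = (symbol, time, amount); time and amount
    are optional, matching the definition of a transactional sequence."""
    symbol: str                      # sigma_i, drawn from an alphabet Sigma
    time: Optional[float] = None     # t_i
    amount: Optional[float] = None   # x_i

# A textual document: symbols only, no times or amounts.
doc = [Transaction(w) for w in "the cat sat".split()]

# A credit-card history: symbol, time, and amount all present.
history = [Transaction("groceries", 1062.0, 79.0),
           Transaction("fuel", 1065.5, 21.0)]
```

The examples on the following slides (text, traffic violations, transcripts, financial histories) are all special cases of this one record shape, differing only in which fields are populated.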
Example - Text
● A textual document is a transactional sequence without times or amounts
$$Z = \sigma_1 \ldots \sigma_N, \quad \sigma_i \in \Sigma$$
Example – Traffic Violation History
● A history of traffic violations is a (hopefully empty) sequence of violation types and associated dates (times)
$$Z = z_1 \ldots z_N, \quad z_i = \langle \sigma_i, t_i \rangle, \quad \sigma_i \in \{\text{stop-sign}, \text{speeding}, \text{DUI}, \ldots\}, \quad t_i \in \mathbb{R}$$
Example – Speech Transcript
● A conversation between a and b can be rendered as a transactional sequence containing words spoken by either a or b at particular times:

$$Z = z_1 \ldots z_N, \quad z_i = \langle \sigma_i, t_i \rangle, \quad \sigma_i \in \{a, b\} \times \Sigma, \quad t_i \in \mathbb{R}$$
Example – Financial History
● A credit card history can be viewed as a transactional sequence with merchant code, date (=time) and amount:
$$Z = z_1 \ldots z_N, \quad z_i = \langle \sigma_i, t_i, x_i \rangle, \quad \sigma_i \in \Sigma, \quad t_i, x_i \in \mathbb{R}$$

9/03/03   Cash Advance       $300
9/04/03   Groceries            79
9/07/03   Fuel                 21
9/10/03   Groceries            42
9/23/03   Department Store    173
10/03/03  Payment            -600
10/09/03  Hotel & Motel       104
10/17/03  Rental Cars         201
10/24/03  Lufthansa           838
Proposed Evolution
[Diagram: LLR tests applied to Text yield Luduan, etc.; Data Augmentation applied to Transactional Data yields Augmented Data, to which augmented LLR tests are applied for Transaction Mining]
LLR for Transaction Sequence
● Assuming reasonable interactions between timing, symbol selection and amount distribution, LLR test can be decomposed
● Two major terms remain, one for symbols and timing together, one for amounts
$$\text{LLR} = \text{LLR}_{\text{symbols \& timing}} + \text{LLR}_{\text{amounts}}$$
Anecdotal Observations
● Symbol selection often looks multinomial, or (rarely) Markov
● Timing is often nearly Poisson (but rate depends on which symbol)
● Distribution of amount appears to depend on symbol, but generally not on inter-transaction timing. Mixed discrete/continuous distributions are common in financial settings
Transaction Sequence Distributions
● Mixed Poisson distributions give desired symbol/timing behavior
● Amount distribution depends on symbol
$$p(Z) = \prod_{\sigma \in \Sigma} \frac{(\lambda_\sigma T)^{k_\sigma} e^{-\lambda_\sigma T}}{k_\sigma!} \prod_{i=1 \ldots N} p(x_i \mid \sigma_i)$$

$$p(Z) = \left[ N! \prod_{\sigma \in \Sigma} \frac{\pi_\sigma^{k_\sigma}}{k_\sigma!} \right] \left[ \frac{(\lambda T)^N e^{-\lambda T}}{N!} \right] \prod_{i=1 \ldots N} p(x_i \mid \sigma_i)$$

$$\lambda_\sigma = \lambda \pi_\sigma, \qquad \sum_{\sigma \in \Sigma} \pi_\sigma = 1$$
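The factorization of the symbol/timing part into a multinomial over symbols times a Poisson on the total count can be checked numerically. A sketch (the function names, counts, and rates are all illustrative, and the common amount factor is omitted since it appears on both sides):

```python
import math

def poisson_product(counts, rates, T):
    """Left side: independent Poisson likelihoods, one per symbol,
    each observed over a window of length T."""
    p = 1.0
    for k, lam in zip(counts, rates):
        p *= (lam * T) ** k * math.exp(-lam * T) / math.factorial(k)
    return p

def multinomial_times_poisson(counts, rates, T):
    """Right side: a multinomial over symbols times a Poisson on the
    total count N, using lambda_sigma = lambda * pi_sigma."""
    lam = sum(rates)                      # overall rate lambda
    pi = [r / lam for r in rates]         # symbol probabilities, sum to 1
    n = sum(counts)
    mult = math.factorial(n)
    for k, p in zip(counts, pi):
        mult *= p ** k / math.factorial(k)
    total = (lam * T) ** n * math.exp(-lam * T) / math.factorial(n)
    return mult * total

counts, rates, T = [3, 1, 5], [0.2, 0.1, 0.4], 10.0
left = poisson_product(counts, rates, T)
right = multinomial_times_poisson(counts, rates, T)
```

The identity is exact, so the two evaluations agree to floating-point precision for any counts and rates.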
LLR for Multinomial
● Easily expressed as entropy of contingency table
$$\begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{m1} & k_{m2} & \cdots & k_{mn} \end{bmatrix} \begin{matrix} k_{1*} \\ k_{2*} \\ \vdots \\ k_{m*} \end{matrix}$$
$$\begin{matrix} k_{*1} & k_{*2} & \cdots & k_{*n} \end{matrix} \qquad k_{**}$$

$$-2 \log \lambda = 2N \left( \sum_{ij} \pi_{ij} \log \pi_{ij} - \sum_i \pi_{i*} \log \pi_{i*} - \sum_j \pi_{*j} \log \pi_{*j} \right)$$
$$= 2 \sum_{ij} k_{ij} \log \frac{k_{ij}\, k_{**}}{k_{i*}\, k_{*j}} = 2 \sum_{ij} k_{ij} \log \frac{\pi_{ij}}{\pi_{i*}\, \pi_{*j}}$$

$$\text{d.o.f.} = (m-1)(n-1)$$
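The entropy form and the count form are algebraically identical, which a few lines of Python can confirm (a sketch; the function names and the example table are mine):

```python
import math

def g2_entropy_form(table):
    """-2 log lambda via the entropy form; returns (G2, d.o.f.)."""
    n = sum(sum(r) for r in table)
    rows = [sum(r) / n for r in table]              # pi_i*
    cols = [sum(c) / n for c in zip(*table)]        # pi_*j
    cells = [k / n for r in table for k in r]       # pi_ij
    h = lambda ps: sum(p * math.log(p) for p in ps if p > 0)
    g2 = 2 * n * (h(cells) - h(rows) - h(cols))
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return g2, dof

def g2_count_form(table):
    """Equivalent count form: 2 * sum k_ij log(k_ij k_** / (k_i* k_*j))."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return 2 * sum(k * math.log(k * n / (rows[i] * cols[j]))
                   for i, r in enumerate(table)
                   for j, k in enumerate(r) if k > 0)

table = [[110, 2442], [2, 27447]]
g2_a, dof = g2_entropy_form(table)
g2_b = g2_count_form(table)
```

The count form is usually preferred in practice because it handles zero cells with the 0·log 0 = 0 convention directly.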
LLR for Poisson Mixture
● Easily expressed using timed contingency table
$$\left[ \begin{matrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{m1} & k_{m2} & \cdots & k_{mn} \end{matrix} \;\middle|\; \begin{matrix} t_1 \\ t_2 \\ \vdots \\ t_m \end{matrix} \right]$$
$$\begin{matrix} k_{*1} & k_{*2} & \cdots & k_{*n} \end{matrix} \;\Big|\; t_*$$

$$-2 \log \lambda = 2 \sum_{ij} k_{ij} \log \left( \frac{k_{ij}}{t_i} \cdot \frac{t_*}{k_{*j}} \right) = 2 \sum_{ij} k_{ij} \log \frac{\lambda_{ij}}{\lambda_{*j}}$$

$$\text{d.o.f.} = (m-1)\, n$$
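A sketch of the timed-table statistic in Python (the function name is hypothetical; rows are observation windows with durations t_i, columns are symbols). The linear rate terms of the Poisson log-likelihoods cancel when the rates are maximum-likelihood estimates, leaving only the log-ratio terms:

```python
import math

def poisson_mixture_g2(table, times):
    """-2 log lambda for a timed contingency table.  Null hypothesis:
    each symbol's Poisson rate is the same in every row; the alternative
    fits a separate rate per cell."""
    t_star = sum(times)
    col_totals = [sum(c) for c in zip(*table)]
    g2 = 0.0
    for i, row in enumerate(table):
        for j, k in enumerate(row):
            if k > 0:
                rate_ij = k / times[i]              # lambda_ij, per-cell MLE
                rate_j = col_totals[j] / t_star     # lambda_*j under the null
                g2 += 2 * k * math.log(rate_ij / rate_j)
    dof = (len(table) - 1) * len(table[0])          # (m-1) n
    return g2, dof
```

A row observed twice as long with exactly twice the counts has identical rates, so the statistic is zero; any rate disparity makes it positive.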
LLR for Normal Distribution
● Assume X₁ and X₂ are normally distributed
● Null hypothesis of identical mean and variance

$$p(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad \hat\mu = \frac{\sum_i x_i}{N}, \qquad \hat\sigma^2 = \frac{\sum_i (x_i - \hat\mu)^2}{N}$$

$$-2 \log \lambda = 2 \left( N_1 \log \frac{\hat\sigma}{\hat\sigma_1} + N_2 \log \frac{\hat\sigma}{\hat\sigma_2} \right), \qquad \text{d.o.f.} = 2$$
Calculations
● Assume X₁ and X₂ are normally distributed
● Null hypothesis of identical mean and variance

$$p(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad \hat\mu = \frac{\sum_i x_i}{N}, \qquad \hat\sigma^2 = \frac{\sum_i (x_i - \hat\mu)^2}{N}$$

$$\log \lambda = \log p(X_1 \mid \hat\mu, \hat\sigma) + \log p(X_2 \mid \hat\mu, \hat\sigma) - \log p(X_1 \mid \hat\mu_1, \hat\sigma_1) - \log p(X_2 \mid \hat\mu_2, \hat\sigma_2)$$
$$= -\sum_{i=1 \ldots N_1} \left[ \log \sqrt{2\pi} + \log \hat\sigma + \frac{(x_{1i} - \hat\mu)^2}{2\hat\sigma^2} \right] - \sum_{i=1 \ldots N_2} \left[ \log \sqrt{2\pi} + \log \hat\sigma + \frac{(x_{2i} - \hat\mu)^2}{2\hat\sigma^2} \right]$$
$$+ \sum_{i=1 \ldots N_1} \left[ \log \sqrt{2\pi} + \log \hat\sigma_1 + \frac{(x_{1i} - \hat\mu_1)^2}{2\hat\sigma_1^2} \right] + \sum_{i=1 \ldots N_2} \left[ \log \sqrt{2\pi} + \log \hat\sigma_2 + \frac{(x_{2i} - \hat\mu_2)^2}{2\hat\sigma_2^2} \right]$$

$$-2 \log \lambda = 2 \left( N_1 \log \frac{\hat\sigma}{\hat\sigma_1} + N_2 \log \frac{\hat\sigma}{\hat\sigma_2} \right), \qquad \text{d.o.f.} = 2$$
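The closed form can be checked against a direct evaluation of the likelihoods; the quadratic terms sum to N/2 under each fit and cancel, leaving only the log-σ terms. A sketch (function names and sample data are mine, not from the slides):

```python
import math

def normal_fit(xs):
    """Maximum-likelihood mean and standard deviation (1/N variance)."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    return mu, sigma

def loglik(xs, mu, sigma):
    """Normal log-likelihood of a sample."""
    return sum(-math.log(math.sqrt(2 * math.pi)) - math.log(sigma)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def normal_g2(x1, x2):
    """-2 log lambda for 'same mean and variance' vs. separate fits,
    computed both in closed form and directly from log-likelihoods."""
    mu, s = normal_fit(x1 + x2)
    mu1, s1 = normal_fit(x1)
    mu2, s2 = normal_fit(x2)
    closed = 2 * (len(x1) * math.log(s / s1) + len(x2) * math.log(s / s2))
    direct = 2 * (loglik(x1, mu1, s1) + loglik(x2, mu2, s2)
                  - loglik(x1 + x2, mu, s))
    return closed, direct

closed, direct = normal_g2([1.0, 2.0, 3.0], [10.0, 12.0, 14.0])
```

The pooled σ̂ is never smaller than the within-sample estimates in a way that makes the statistic negative, so −2 log λ ≥ 0 as a χ² statistic must be.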
Transactional Data in Context
[Figure: a record mixing conventional values such as 1.234, years, male with bags of transactional values]

Real-world input often consists of one or more bags of transactional values combined with an assortment of conventional numerical or categorical values.

Extracting information from the transactional data can be difficult and is therefore often not done.
Real World Target Variables
[Figure: target classes with instances labeled as red, mislabeled instances, and secondary labels a and b]
Luduan Modeling Methodology
● Use LLR tests to find exemplars (query terms) from secondary label sets
● Create positive and negative secondary label models for each class of transactional data
● Cluster using output of all secondary label models and all conventional data
● Test clusters for stability
● Use distance to cluster centroids and/or secondary label models as derived input variables
Example #1- Auto Insurance
● Predict probability of attrition and loss for auto insurance customers
● Transactional variables include
– Claim history
– Traffic violation history
– Geographical code of residence(s)
– Vehicles owned
● Observed attrition and loss define past behavior
Derived Variables
● Split training data according to observable classes
– These include attrition and loss > 0
● Define LLR variables for each class/variable combination
● These 2·m·v derived variables (positive and negative models for each of m classes and v transactional variables) can be used for clustering (spectral, k-means, neural gas, ...)
● Proximities to the clusters in LLR space are the new modeling variables
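One way to realize such an LLR derived variable is to compare a single customer's symbol counts against the aggregate counts for an observed behavioral class using multinomial G². This sketch is mine (helper names and the traffic-violation data are hypothetical):

```python
import math
from collections import Counter

def g2(table):
    """Multinomial G^2 for an m x n contingency table of counts."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return 2 * sum(k * math.log(k * n / (rows[i] * cols[j]))
                   for i, r in enumerate(table)
                   for j, k in enumerate(r) if k > 0)

def llr_feature(record_symbols, class_symbols):
    """One derived variable: G^2 comparing a record's symbol counts
    against the aggregate counts for a behavioral class."""
    alphabet = sorted(set(record_symbols) | set(class_symbols))
    rec = Counter(record_symbols)
    cls = Counter(class_symbols)
    table = [[rec[a] for a in alphabet], [cls[a] for a in alphabet]]
    return g2(table)

# Hypothetical example: one customer's violation history vs. the pooled
# history of customers who later churned.
feature = llr_feature(["speeding", "speeding", "DUI"],
                      ["stop-sign", "speeding", "speeding", "stop-sign"])
```

Computing one such feature per class/variable combination yields the 2·m·v derived variables fed into clustering.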
Results
● Conventional NN modeling by a competent analyst was able to explain 2% of variance
– No significant difference on training/test data
● Models built using Luduan-based cluster proximity variables were able to explain 70% of variance (KS approximately 0.4)
– No significant difference on training/test data
Example #2 – Fraud Detection
● Predict probability that an account is likely to result in charge-off due to payment fraud
● Transactional variables include
– Zip code
– Recent payments and charges
– Recent non-monetary transactions
● Bad payments, charge-off, delinquency are observable behavioral outcomes
Derived Variables
● Split training data according to observable classes (charge-off, NSF payment, delinquency)
● Define LLR variables for each class/variable combination
● These 2·m·v derived variables can be used directly as model variables
● No results available for publication
Example #3 – E-commerce monitor
● Detect malfunctions or changes in behavior of e-commerce system due to fraud or system failure
● Transaction variables include (time, SKU, amount)
● Desired output is alarm for operational staff
Derived Variables
● Time warp derived as product of smoothed daily and weekly sales rates
● Time warp updated monthly to account for seasonal variations
● Warped time used in transactions
● Warped time since last transaction ≈ LLR in the single-product/single-price case
● Full LLR allows testing for significant differences in a Champion/Challenger e-commerce optimizer
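A minimal sketch of the time-warp idea (everything here is assumed for illustration: the rate profiles, the step size, and the function name). Warped time advances in proportion to the product of daily and weekly rate profiles, so quiet periods are compressed and busy periods stretched:

```python
def warped_time(t_hours, daily_rate, weekly_rate):
    """Map wall-clock hours to warped time by numerically integrating the
    product of smoothed daily and weekly rate profiles."""
    total = 0.0
    step = 0.1                       # integration step, in hours
    s = 0.0
    while s < t_hours:
        hour_of_day = int(s % 24)
        day_of_week = int(s // 24) % 7
        total += daily_rate[hour_of_day] * weekly_rate[day_of_week] * step
        s += step
    return total

# Hypothetical profiles: busier at midday, quieter overnight and on weekends.
daily = [0.2] * 8 + [1.0] * 12 + [0.4] * 4   # 24 hourly weights
weekly = [1.0] * 5 + [0.5, 0.5]              # 7 daily weights
```

In warped time a well-behaved transaction stream looks like a unit-rate Poisson process, so a long warped-time gap since the last transaction is directly a low-probability event worth alarming on.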
Transductive Derived Variables
● All objective segmentations of data provide new LLR variables
● Cross product of model outputs versus objective segmentation provide additional LLR variables for second level model derivation
● Comparable to Luduan query construction technique
– TREC pooled evaluation technique provided cross product of relevance versus perceived relevance
Relationship To Risk Tables
● Risk tables are estimates of relative risk for each value of a single symbolic variable
– Useful with variables such as post-code of primary residence
– Ad hoc smoothing used to deal with small counts
● Not usually applied to symbol sequences
● Risk tables ignore time entirely
● Risk tables require considerable analyst finesse
Relationship to Known Techniques
● Clock-tick symbols
– Time-embedded symbols viewed as sequences of symbols along with “ticks” that occur at fixed time intervals
– Allows multinomial LLR as a poor man's mixed Poisson LLR
● Not a well-known technique; not used in production models
● Difficulties in choosing time resolution and counting period
Conclusions
● Theoretical properties of transaction variables are well defined
● Similarities to known techniques indicate low probability of gross failure
● Similarity to Luduan techniques suggests high probability of superlative performance
● Transactional LLR statistics define similarity metrics useful for clustering