Advanced Quantitative Research Methodology, Lecture Notes:
Matching Methods for Causal Inference

Gary King
Institute for Quantitative Social Science, Harvard University

Copyright 2015 Gary King, All Rights Reserved. GaryKing.org.

1 / 44




  • Matching Overview

    • Current practice: "Matching As Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference" (Daniel Ho, Kosuke Imai, Gary King, Elizabeth Stuart)

    • Current practice violates current statistical theory. So let's change the theory: "A Theory of Statistical Inference for Matching Methods in Applied Causal Research" (Stefano Iacus, Gary King, Giuseppe Porro)

    • The most popular method (propensity score matching, used in 49,600 articles!) sounds magical: "Why Propensity Scores Should Not Be Used for Matching" (Gary King, Richard Nielsen)

    • Matching methods optimize either imbalance (≈ bias) or # units pruned (≈ variance); users need both simultaneously: "The Balance-Sample Size Frontier in Matching Methods for Causal Inference" (Gary King, Christopher Lucas, and Richard Nielsen)

    2 / 44


  • Overview of Matching for Causal Inference

    • Goal: reduce model dependence
    • A nonparametric, non-model-based approach
    • Makes parametric models work better rather than substituting for them (i.e., matching is not an estimator; it's a preprocessing method)
    • Should have been called pruning (no bias is introduced if pruning is a function of T and X, but not Y)
    • Apply the model to the preprocessed (pruned) rather than the raw data
    • Violates the "more data is better" principle, but that only applies when you know the DGP
    • Overall idea (sketched in code after this slide):
      • If each treated unit exactly matches a control unit w.r.t. X, then: (1) the treated and control groups are identical, (2) X is no longer a confounder, and (3) there is no need to worry about the functional form (ȲT − ȲC is good enough).
      • If the treated and control groups are better balanced than when you started, due to pruning, model dependence is reduced

    3 / 44
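A minimal sketch of the pruning idea in Python, under stated assumptions: the dataset, variable names (x1, x2), and effect size are all invented for illustration. Note that pruning depends only on T and X, never on Y.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "T": rng.integers(0, 2, n),    # treatment indicator
    "x1": rng.integers(0, 3, n),   # discrete confounders, so exact
    "x2": rng.integers(0, 2, n),   # matches are possible
})
df["Y"] = 2 * df["T"] + df["x1"] - df["x2"] + rng.normal(size=n)

# Prune as a function of T and X only (never Y): keep covariate strata
# that contain at least one treated and one control unit.
keep = df.groupby(["x1", "x2"])["T"].transform(lambda t: t.nunique() == 2)
matched = df[keep]

# Within exact-match strata the difference in means is good enough;
# average stratum-level effects, weighted by the number of treated units.
effects = matched.groupby(["x1", "x2"]).apply(
    lambda g: g.loc[g["T"] == 1, "Y"].mean() - g.loc[g["T"] == 0, "Y"].mean()
)
weights = matched[matched["T"] == 1].groupby(["x1", "x2"]).size()
satt = np.average(effects.loc[weights.index], weights=weights)
print(f"estimated effect: {satt:.2f} (truth: 2.0)")
```

Because every retained stratum contains both treated and control units, the within-stratum difference in means requires no functional-form assumptions.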


  • Model Dependence: A Simpler Example (King and Zeng, 2006: fig. 4, Political Analysis)

    [Figure omitted: see King and Zeng, 2006, fig. 4; a code sketch of the same phenomenon follows this slide]

    What to do?

    • Preprocess I: Eliminate the extrapolation region
    • Preprocess II: Match (prune) within the interpolation region
    • Model remaining imbalance (as you would without matching)

    4 / 44
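A minimal sketch of model dependence under extrapolation (simulated data, not King and Zeng's): two specifications that are nearly indistinguishable on the observed support give very different answers once we leave it.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 5, 60)                   # support of the observed data
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, 60)  # truth is linear plus noise

lin = np.polyfit(x, y, deg=1)    # linear fit
quad = np.polyfit(x, y, deg=2)   # quadratic fit; fits about equally well

for x0 in (2.5, 10.0):           # inside the data vs. far outside it
    print(f"x={x0}: linear={np.polyval(lin, x0):.2f}, "
          f"quadratic={np.polyval(quad, x0):.2f}")
# Inside the data the two predictions nearly coincide; at x = 10 they can
# diverge wildly -- the answer depends on an unverifiable functional form.
```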


  • Remove Extrapolation Region, then Match

    • Must remove data (selecting on X) to avoid extrapolation.
    • Options to find the "common support" of p(X | T = 1) and p(X | T = 0):
      1. Exact matching, so support is defined only at the data points
      2. Less, but still conservative: the convex hull approach (sketched after this slide)
         • let T* and X* denote subsets of T and X s.t. {1 − T*, X*} falls within the convex hull of {T, X}
         • use X* as the estimate of common support (deleting the remaining observations)
      3. Other approaches, based on distance metrics, propensity scores, etc.
      4. Easiest: Coarsened Exact Matching; no separate step is needed

    5 / 44
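A minimal sketch of the convex hull idea, assuming hypothetical simulated covariates and using scipy's Delaunay triangulation for the point-in-hull test. For simplicity it checks only one direction of the {1 − T*, X*} criterion above: which controls fall inside the hull of the treated units' X.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
X_treated = rng.normal(0.0, 1.0, size=(50, 2))
X_control = rng.normal(0.5, 1.5, size=(200, 2))

hull = Delaunay(X_treated)                  # triangulates the treated cloud
inside = hull.find_simplex(X_control) >= 0  # find_simplex returns -1 outside

# Prune controls we could only compare to treated units by extrapolating.
X_control_kept = X_control[inside]
print(f"kept {inside.sum()} of {len(X_control)} controls as common support")
```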


  • Matching within the Interpolation Region (Ho, Imai, King, Stuart, 2007: fig. 1, Political Analysis)

    [Figure: treated (T) and control (C) units plotted by Education (years, x-axis, 12-28) against Outcome (y-axis, 0-12); successive panels prune the controls lying outside the region of the treated units]

    Matching reduces model dependence, bias, and variance

    6 / 44

  • Empirical Illustration: Carpenter, AJPS, 2002

    • Hypothesis: Democratic Senate majorities slow FDA drug approval time
    • n = 408 new drugs (262 approved, 146 pending)
    • Lognormal survival model (sketched after this slide)
    • Seven oversight variables (median adjusted ADA scores for the House and Senate committees as well as for the House and Senate floors, Democratic majority in the House and Senate, and a Democratic presidency)
    • 18 control variables (clinical factors, firm characteristics, media variables, etc.)

    7 / 44
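A minimal sketch of a lognormal survival likelihood with right censoring, the model family used here: approved drugs contribute the log density of their (log) approval time, pending drugs are censored and contribute the log survival function. The data, coefficients, and censoring mechanism below are simulated, not Carpenter's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n, k = 408, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
t = np.exp(X @ np.array([3.0, 0.4, -0.2]) + rng.normal(0, 0.8, n))
approved = rng.random(n) < 0.64     # events; the rest are treated as censored

def negloglik(theta):
    beta, sigma = theta[:k], np.exp(theta[k])   # log-sigma keeps sigma > 0
    z = (np.log(t) - X @ beta) / sigma
    # lognormal log-density for approvals; log-survival for pending drugs
    ll = np.where(approved,
                  norm.logpdf(z) - np.log(sigma) - np.log(t),
                  norm.logsf(z))
    return -ll.sum()

fit = minimize(negloglik, x0=np.zeros(k + 1), method="BFGS")
print(fit.x[:k])   # recovered regression coefficients
```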


  • Evaluating Reduction in Model Dependence

    • Focus on the causal effect of a Democratic majority in the Senate (identified by Carpenter as not robust).
    • Match: prune 49 units (32 treated, 17 control units).
    • Run all 262,143 possible specifications and calculate the ATE for each (sketched after this slide).
    • Look at the variability in the ATE estimate across specifications.
    • (Normal applications would use only one or a few specifications.)

    8 / 44
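Note that 262,143 = 2^18 − 1: one specification per non-empty subset of the 18 control variables. A minimal sketch of that sweep (simulated data, 4 controls instead of 18 to keep it fast):

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, k = 500, 4
T = rng.integers(0, 2, n).astype(float)
X = rng.normal(size=(n, k))
Y = 1.5 * T + X @ np.array([1.0, -0.5, 0.25, 0.0]) + rng.normal(size=n)

# Re-estimate the effect of T under every non-empty subset of controls.
estimates = []
for r in range(1, k + 1):
    for subset in itertools.combinations(range(k), r):
        Z = np.column_stack([np.ones(n), T, X[:, subset]])
        coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        estimates.append(coef[1])   # coefficient on T = effect estimate

# The spread of the estimates summarizes model dependence in one number.
print(f"{len(estimates)} specifications, "
      f"range of estimates: {max(estimates) - min(estimates):.3f}")
```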


  • Reducing Model Dependence

    [Figure: densities of the estimated in-sample average treatment effect for the treated across specifications, for the raw data vs. the matched data, with the point estimate of Carpenter's specification using the raw data marked]

    Figure: SATT histogram: effect of a Democratic Senate majority on FDA drug approval time, across 262,143 specifications.

    9 / 44

  • Another Example: Jeffrey Koch, AJPS, 2002

    [Figure: densities of the estimated average treatment effect for the raw data vs. the matched data, with the point estimate from the raw data marked]

    Figure: SATT histogram: effect of being a highly visible female Republican candidate, across 63 possible specifications with the Koch data.

    10 / 44

  • The Advantage of Matching

    Without Matching: Imbalance → Model Dependence → Researcher discretion → Bias

    With Matching (as the slide's overlays strike out each term): imbalance, model dependence, researcher discretion, and bias are all eliminated.

    11 / 44

  • Current Practice: Matching as Preprocessing

    • Yi dep var, Ti (1 = treated, 0 = control), Xi confounders
    • Treatment effect for treated observation i:

      TEi = Yi(1) − Yi(0) = observed − unobserved

    • Quantities of interest:
      1. SATT: Sample Average Treatment effect on the Treated:

         SATT = mean_{i ∈ {Ti = 1}} (TEi)

      2. FSATT: Feasible Sample Average Treatment effect on the Treated
    • Estimate Yi(0) with Yj from a matched (Xi ≈ Xj) control (sketched after this slide)
    • Prune nonmatches: reduces imbalance & model dependence
    • Big convenience: follow preprocessing with whatever statistical method you'd have used without matching

    12 / 44
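A minimal sketch of the estimation strategy on this slide, with simulated data: impute each treated unit's unobserved Yi(0) with the outcome of its nearest control on X (1-nearest-neighbor matching with replacement, Euclidean distance; Mahalanobis and other metrics are also common), then average TEi over the treated.

```python
import numpy as np

rng = np.random.default_rng(5)
n_t, n_c, k = 50, 150, 3
X_t = rng.normal(0.3, 1.0, size=(n_t, k))   # treated covariates
X_c = rng.normal(0.0, 1.0, size=(n_c, k))   # control covariates
Y_t = X_t.sum(axis=1) + 2.0 + rng.normal(size=n_t)   # true effect = 2
Y_c = X_c.sum(axis=1) + rng.normal(size=n_c)

# Pairwise Euclidean distances between every treated and control unit.
d = np.linalg.norm(X_t[:, None, :] - X_c[None, :, :], axis=2)
j = d.argmin(axis=1)       # index of each treated unit's nearest control

te = Y_t - Y_c[j]          # TE_i = Y_i(1) - estimated Y_i(0)
satt = te.mean()           # average over the treated units only
print(f"SATT estimate: {satt:.2f} (truth: 2.0)")
```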


  • Approximating Randomized Experiments

    • Types of experiments (contrasted in the sketch after this slide):
      1. Complete randomization: treatment assignment by coin flips
         Balance on X: only on average
         Balance on unmeasured vars: only on average
      2. Fully blocked: match pairs on X (exactly), then flip coins
         Balance on X: perfect in sample
         Balance on unmeasured vars: only on average
    • Fully blocked dominates complete randomization for: imbalance, model dependence, power, efficiency, bias, research costs, and robustness.
    • Which experiment does each matching method approximate?
      • PSM: complete randomization
      • Other methods: fully blocked
    • ⟹ As we show, other methods usually dominate PSM (but wait, it gets worse for PSM)

    13 / 44
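A minimal sketch contrasting the two designs (simulated data): pairs are constructed to share identical X, so blocked assignment balances X perfectly in sample, while complete randomization balances it only on average.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100                                     # 50 matched pairs of units
X = np.repeat(rng.normal(size=n // 2), 2)   # each pair shares the same X

# Complete randomization: a random half of all units is treated.
t_complete = rng.permutation(
    np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)

# Fully blocked: within each pair, one coin flip picks the treated unit.
first = rng.integers(0, 2, n // 2)
t_blocked = np.zeros(n, dtype=bool)
t_blocked[2 * np.arange(n // 2) + first] = True

for name, t in [("complete", t_complete), ("blocked", t_blocked)]:
    print(name, X[t].mean() - X[~t].mean())  # in-sample imbalance on X
# Blocked imbalance is exactly 0 by construction; complete randomization's
# imbalance is only 0 in expectation and varies from sample to sample.
```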

  • Approximating Randomized Experiments

    • Types of experiments:

    1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average

    Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample

    Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization

    for:imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44

  • Approximating Randomized Experiments

    • Types of experiments:1. Compete Randomization: Treatment assignment by coin flips

    Balance on X : only on average Balance on unmeasured vars: only on average

    2. Fully Blocked: Match pairs on X (exactly), then flip coins

    Balance on X : perfect in sample Balance on unmeasured vars: only on average

    • Fully blocked dominates complete randomization for:

    imbalance, model dependence, power, efficiency, bias,research costs, and robustness.

    • Matching methods approximate which experiment?

    • PSM: complete randomization• Other methods: fully blocked

    • =⇒ As we show, other methods usually dominate PSM

    (but wait, it gets worse for PSM)

    13 / 44


  • Method 1: Mahalanobis Distance Matching (Approximates Fully Blocked Experiment)

    1. Preprocess (Matching)
       • Distance(Xi, Xj) = √[(Xi − Xj)′ S⁻¹ (Xi − Xj)], where S is the sample covariance matrix of X
       • Match each treated unit to the nearest control unit
       • Control units: not reused; pruned if unused
       • Prune matches if Distance > caliper
    2. Estimation: Difference in means or a model
    3. Checking: Measure imbalance, tweak, repeat, . . .

    A hedged code sketch of this procedure follows below.

    14 / 44
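    As a concrete illustration, here is a minimal Python/NumPy sketch of greedy 1:1 Mahalanobis matching without replacement. The helper `mahalanobis_match`, its arguments, and the greedy ordering over treated units are my own illustrative choices under the slide's description, not a definitive implementation.

        import numpy as np

        def mahalanobis_match(X, t, caliper=None):
            """Greedy 1:1 Mahalanobis matching without replacement (sketch).

            X: (n, k) covariate matrix; t: (n,) 0/1 treatment indicator.
            Returns a list of (treated_index, control_index) pairs.
            """
            S_inv = np.linalg.inv(np.cov(X, rowvar=False))  # S = sample cov of X
            treated = np.flatnonzero(t == 1)
            controls = list(np.flatnonzero(t == 0))
            pairs = []
            for i in treated:
                d = X[controls] - X[i]
                # Mahalanobis distance from unit i to each remaining control
                dist = np.sqrt(np.einsum("nk,kl,nl->n", d, S_inv, d))
                j = int(np.argmin(dist))
                if caliper is None or dist[j] <= caliper:
                    pairs.append((i, controls.pop(j)))      # control not reused
            return pairs                                    # unused controls pruned

    Treated units whose nearest control lies beyond the caliper are pruned, along with every control that is never chosen.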


  • Mahalanobis Distance Matching

    [Figure: scatterplot of treated (T) and control (C) units, Age (20–80) against Education in years (12–28); successive panels show each treated unit matched to its nearest control and the remaining controls pruned.]

    15 / 44

  • Method 2: Coarsened Exact Matching (Approximates Fully Blocked Experiment)

    1. Preprocess (Matching)
       • Temporarily coarsen X as much as you're willing
         • e.g., Education (grade school, high school, college, graduate)
       • Apply exact matching to the coarsened X, C(X)
         • Sort observations into strata, each with unique values of C(X)
         • Prune any stratum with 0 treated or 0 control units
       • Pass on original (uncoarsened) units except those pruned
    2. Estimation: Difference in means or a model
       • Need to weight controls in each stratum to equal treateds
    3. Checking: Determine matched sample size, tweak, repeat, . . .
       • Easier, but still iterative

    A hedged code sketch of this procedure follows below.

    16 / 44
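    The following pandas sketch mirrors the three steps. The `cem` helper, its bin-edge interface, and the within-stratum weights (each stratum's controls weighted n_treated/n_control so they total its treateds, without the global normalization some implementations apply) are illustrative assumptions, not the cem package's API.

        import pandas as pd

        def cem(df, treat, coarsenings):
            """Coarsened exact matching sketch (hypothetical helper).

            df: data frame; treat: name of the 0/1 treatment column;
            coarsenings: dict mapping covariate name -> bin edges for pd.cut.
            """
            # 1. Temporarily coarsen X into C(X); df's original values untouched
            key = pd.concat([pd.cut(df[c], bins=b).astype(str)
                             for c, b in coarsenings.items()],
                            axis=1).agg("|".join, axis=1)

            out = df.copy()
            out["weight"] = 0.0
            for _, g in out.groupby(key):       # strata: unique values of C(X)
                n_t = int((g[treat] == 1).sum())
                n_c = int((g[treat] == 0).sum())
                if n_t == 0 or n_c == 0:
                    continue                    # prune strata with 0 T or 0 C
                out.loc[g.index[g[treat] == 1], "weight"] = 1.0
                out.loc[g.index[g[treat] == 0], "weight"] = n_t / n_c
            return out[out["weight"] > 0]       # pass on original, unpruned units

    For example, cem(df, "treated", {"education": [0, 12, 16, 18, 30], "age": [18, 30, 40, 55, 65, 100]}) with hypothetical bin edges coarsens education and age before the exact match.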


  • Coarsened Exact Matching

    [Figure: the same Age-by-Education scatterplot of treated (T) and control (C) units; the axes are temporarily coarsened into bins (Education: HS, BA, MA, PhD, 2nd PhD; Age: drinking age, "don't trust anyone over 30", the big 40, senior discounts, retirement, old), strata lacking both treated and control units are pruned, and the original uncoarsened values are then restored.]

    17 / 44

  • Method 3: Propensity Score Matching (Approximates Completely Randomized Experiment)

    1. Preprocess (Matching)
       • Reduce the k elements of X to a scalar: πi ≡ Pr(Ti = 1 | X) = 1 / (1 + e^(−Xiβ))
       • Distance(Xi, Xj) = |πi − πj|
       • Match each treated unit to the nearest control unit
       • Control units: not reused; pruned if unused
       • Prune matches if Distance > caliper
    2. Estimation: Difference in means or a model
    3. Checking: Measure imbalance, tweak, repeat, . . .

    A hedged code sketch of this procedure follows below.

    18 / 44
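    For comparison with the Mahalanobis sketch above, here is a minimal Python sketch of greedy 1:1 PSM without replacement, estimating π with scikit-learn's logistic regression. The `psm` helper and its greedy ordering are again my own illustrative choices.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def psm(X, t, caliper=None):
            """Greedy 1:1 propensity score matching without replacement (sketch)."""
            # Reduce the k columns of X to pi_i = Pr(T_i = 1 | X) via a logit
            pi = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
            treated = np.flatnonzero(t == 1)
            controls = list(np.flatnonzero(t == 0))
            pairs = []
            for i in treated:
                dist = np.abs(pi[controls] - pi[i])  # |pi_i - pi_j|
                j = int(np.argmin(dist))
                if caliper is None or dist[j] <= caliper:
                    pairs.append((i, controls.pop(j)))  # control not reused
            return pairs                                # unused controls pruned

    Matching on the scalar π discards the within-score-level information in X, which is why PSM approximates only complete randomization rather than a fully blocked design.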


  • Propensity Score Matching

    [Figure: the Age-by-Education scatterplot of treated (T) and control (C) units projected onto a one-dimensional propensity score scale running from 0 to 1; matching then proceeds on the scalar score rather than on X.]

    19 / 44