Inductive Amnesia: The Reliability of Iterated Belief Revision

A Table of Opposites

  Even                 | Odd
  Straight             | Crooked
  Reliability          | "Confirmation"
  Performance          | "Primitive norms"
  Correctness          | "Coherence"
  Classical statistics | Bayesianism
  Learning theory      | Belief Revision Theory

The Idea
- Belief revision is inductive reasoning
- A restrictive norm prevents us from finding truths we could have found by other means
- Some proposed belief revision methods are restrictive
- The restrictiveness is expressed as inductive amnesia

Inductive Amnesia
- No restriction on memory...
- No restriction on predictive power...
- But prediction causes memory loss...
- And perfect memory precludes prediction!
- A fundamental dilemma

Outline
I. Seven belief revision methods
II. Belief revision as learning
III. Properties of the methods
IV. The Goodman hierarchy
V. Negative results
VI. Positive results
VII. Discussion

Points of Interest
- Strong negative and positive results
- Short-run advice from limiting analysis
- 2 is magic for reliable belief revision
- Learning as cube rotation
- Grue

Part I: Iterated Belief Revision

Bayesian (Vanilla) Updating
- Propositions are sets of possible worlds
- The current belief state is a proposition B

Bayesian (Vanilla) Updating
- New evidence E arrives; updated belief: B * E = B ∩ E
- Perfect memory
- No inductive leaps

Epistemic Hell
- Surprise! The new evidence E is disjoint from B, so B * E = B ∩ E = ∅: epistemic hell
- Evidence contradicting current belief arises in: scientific revolutions, suppositional reasoning, conditional pragmatics, decision theory, game theory, databases

Ordinal Entrenchment (Spohn 88)
- An epistemic state S maps worlds to ordinals (levels 0, 1, 2, ..., ω, ω + 1, ...)
- Belief state of S: b(S) = S^-1(0)
- Determines centrality of beliefs
- Model: orders of infinitesimal probability

Belief Revision Methods
- A method * takes an epistemic state S and a proposition E to an epistemic state S * E
- The belief state moves from b(S) to b(S * E)

Spohn Conditioning *C (Spohn 88)
- New evidence E may contradict b(S)
- Conditions an entire entrenchment ordering
- Perfect memory
- Inductive leaps
- No epistemic hell on consistent evidence sequences
- Epistemic hell on inconsistent sequences
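As a rough sketch (not from the slides): modelling an epistemic state as a finite map from worlds to integer ranks, with b(S) = S^-1(0), Spohn conditioning restricts the whole ordering to E and renormalizes. The world names and helper functions here are illustrative assumptions.

```python
def belief(S):
    """b(S): the set of worlds at rank 0 (the most plausible level)."""
    return {w for w, r in S.items() if r == 0}

def spohn_condition(S, E):
    """Spohn conditioning *C: restrict the entire entrenchment ordering
    to the E-worlds and shift so the best E-worlds sit at rank 0.
    Worlds outside E are discarded, so a datum contradicting every
    remaining world lands in epistemic hell."""
    live = {w: r for w, r in S.items() if w in E}
    if not live:
        raise ValueError("epistemic hell: E refutes every remaining world")
    m = min(live.values())
    return {w: r - m for w, r in live.items()}

S = {"w1": 0, "w2": 1, "w3": 2}
print(spohn_condition(S, {"w2", "w3"}))          # {'w2': 0, 'w3': 1}
print(belief(spohn_condition(S, {"w2", "w3"})))  # {'w2'}
```

Discarding the refuted worlds is what buys perfect memory here, and also what makes inconsistent evidence sequences fatal.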

Lexicographic Updating *L (Spohn 88, Nayak 94)
- Lift refuted possibilities above non-refuted possibilities, preserving order
- Perfect memory on consistent sequences
- Inductive leaps
- No epistemic hell
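A minimal sketch in the same toy integer-rank model (world names and the exact rank arithmetic are illustrative assumptions; any scheme that keeps both relative orders and puts all refuted worlds on top would do):

```python
def lexicographic_update(S, E):
    """*L: E-worlds keep their relative order starting at rank 0;
    refuted (non-E) worlds are lifted above all of them, with their
    own relative order preserved."""
    live = {w: r for w, r in S.items() if w in E}
    dead = {w: r for w, r in S.items() if w not in E}
    out, top = {}, 0
    if live:
        m = min(live.values())
        out = {w: r - m for w, r in live.items()}
        top = max(out.values()) + 1
    if dead:
        m = min(dead.values())
        out.update({w: top + (r - m) for w, r in dead.items()})
    return out

S = {"a": 0, "b": 1, "c": 2}
print(lexicographic_update(S, {"b"}))       # {'b': 0, 'a': 1, 'c': 3}
print(lexicographic_update(S, {"b", "c"}))  # {'b': 0, 'c': 1, 'a': 2}
```

Because refuted worlds are kept (above the live ones) rather than discarded, contradictory evidence never empties the state: no epistemic hell.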

Minimal or Natural Updating *M (Spohn 88, Boutilier 93)
- Drop the lowest possibilities consistent with the data to the bottom and raise everything else up one notch
- Inductive leaps
- No epistemic hell
- But...
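A toy sketch of the rule as stated on the slide, in the same integer-rank model (assuming E is consistent with at least one world in S; world names are hypothetical):

```python
def natural_update(S, E):
    """*M: the most plausible E-worlds drop to rank 0; every other
    world, including formerly believed non-E worlds, moves up one
    notch.  Refuted worlds are kept, so there is no epistemic hell."""
    m = min(r for w, r in S.items() if w in E)  # best rank among E-worlds
    return {w: (0 if (w in E and r == m) else r + 1)
            for w, r in S.items()}

S = {"a": 0, "b": 1, "c": 1}
print(natural_update(S, {"b", "c"}))  # {'a': 1, 'b': 0, 'c': 0}
```

Note that the old belief world "a" only moves up one notch; it sits just above the new belief, which is exactly what lets it drift back down later, the amnesia shown on the next slide.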

Amnesia
- What goes up can come down
- Belief no longer entails past data

The Flush-to-a Method *F,a (Goldszmidt and Pearl 94)
- Send non-E worlds to a fixed level a and drop E-worlds rigidly to the bottom
- Perfect memory on sequentially consistent data if a is high enough
- Inductive leaps
- No epistemic hell
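In the same toy model, a sketch of flush-to-a (assuming E overlaps the worlds of S; names are illustrative):

```python
def flush_update(S, E, a):
    """*F,a: E-worlds drop rigidly (all by the same amount) so their
    best member sits at rank 0; every non-E world is flushed to the
    fixed level a, losing its previous rank entirely."""
    m = min(r for w, r in S.items() if w in E)
    return {w: (r - m if w in E else a) for w, r in S.items()}

S = {"a": 0, "b": 2, "c": 3}
print(flush_update(S, {"b", "c"}, 5))  # {'a': 5, 'b': 0, 'c': 1}
```

Flushing collapses all refuted worlds onto one level, which is why memory survives only when a sits above everything the live worlds can reach.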

Ordinal Jeffrey Conditioning *J,a (Spohn 88)
- Drop E-worlds to the bottom; drop non-E-worlds to the bottom and then jack them up to level a
- Perfect memory on consistent sequences if a is large enough
- No epistemic hell
- Reversible
- But...
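A sketch in the toy integer-rank model (world names are hypothetical; the `default=0` only covers the degenerate case where no world is refuted):

```python
def jeffrey_update(S, E, a):
    """*J,a: E-worlds drop so their best member is at rank 0; non-E
    worlds drop the same way and are then jacked up to level a, so
    their relative order is preserved and the move is reversible."""
    mE = min(r for w, r in S.items() if w in E)
    mN = min((r for w, r in S.items() if w not in E), default=0)
    return {w: (r - mE if w in E else r - mN + a) for w, r in S.items()}

S = {"a": 0, "b": 2, "c": 3}
print(jeffrey_update(S, {"b", "c"}, 2))  # {'a': 2, 'b': 0, 'c': 1}

# Backsliding: a refuted world above level a becomes MORE plausible.
print(jeffrey_update({"x": 4, "y": 0}, {"y"}, 2))  # {'x': 2, 'y': 0}
```

The second call is the "but...": the refuted world x falls from rank 4 to rank 2, which is the empirical backsliding of the next slide.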

Empirical Backsliding
- Ordinal Jeffrey conditioning can increase the plausibility of a refuted possibility

The Ratchet Method *R,a (Darwiche and Pearl 97)
- Like ordinal Jeffrey conditioning, except refuted possibilities move up by a from their current positions
- Perfect memory if a is large enough
- Inductive leaps
- No epistemic hell
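A sketch in the same toy model (names illustrative), which makes the contrast with ordinal Jeffrey conditioning visible:

```python
def ratchet_update(S, E, a):
    """*R,a: E-worlds drop rigidly so their best member is at rank 0;
    each refuted (non-E) world moves up by a from its CURRENT position,
    so a refuted possibility can never gain plausibility (no backsliding)."""
    mE = min(r for w, r in S.items() if w in E)
    return {w: (r - mE if w in E else r + a) for w, r in S.items()}

S = {"a": 0, "b": 2, "c": 3}
print(ratchet_update(S, {"b", "c"}, 2))  # {'a': 2, 'b': 0, 'c': 1}

# Where *J,a would pull a high refuted world down to level a,
# the ratchet pushes it further up:
print(ratchet_update({"x": 4, "y": 0}, {"y"}, 2))  # {'x': 6, 'y': 0}
```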

Part II: Belief Revision as Learning

Iterated Belief Revision
- S0 * () = S0
- S0 * (E0, ..., En, En+1) = (S0 * (E0, ..., En)) * En+1
- Revision traces out a sequence of states S0, S1, S2, ... with belief states b(S0), b(S1), b(S2), ...
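The recursion above is a left fold of the one-step method over the evidence sequence. A sketch, using a simplified stand-in for lexicographic updating as the one-step method (the compressed ranks here are an illustrative simplification, not the slides' exact construction):

```python
from functools import reduce

def iterate(star, S0, evidence):
    """S0 * (E0, ..., En): fold the one-step method over the sequence,
    matching S0 * () = S0 and
    S0 * (E0, ..., En+1) = (S0 * (E0, ..., En)) * En+1."""
    return reduce(star, evidence, S0)

def lex(S, E):
    """Stand-in one-step method: live worlds below refuted worlds,
    relative order preserved, ranks compressed to 0, 1, 2, ..."""
    live = sorted((r, w) for w, r in S.items() if w in E)
    dead = sorted((r, w) for w, r in S.items() if w not in E)
    return {w: i for i, (r, w) in enumerate(live + dead)}

S0 = {"a": 0, "b": 1, "c": 2}
print(iterate(lex, S0, [{"b", "c"}, {"c"}]))  # {'c': 0, 'b': 1, 'a': 2}
```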

A Very Simple Learning Paradigm
- A mysterious system emits an outcome sequence, e.g. 0 0 1 0 0 ...
- e ranges over the possible infinite trajectories; e|n is the initial segment of e of length n

Empirical Propositions
- Empirical propositions are sets of possible trajectories
- Some special cases:
- [s] = the proposition that the finite sequence s has occurred
- [k, n] = the proposition that outcome k occurs at stage n
- {e} = the proposition that the trajectory is exactly e

Trajectory Identification
- (*, S0) identifies e just in case for all but finitely many n, b(S0 * ([0, e(0)], ..., [n, e(n)])) = {e}
- As the data along e are fed in, the belief states b(S0), b(S1), b(S2), ... converge to {e}

Reliability
- Let K be a set of possible outcome trajectories
- (*, S0) identifies K just in case (*, S0) identifies each e in K

Identifiability Characterized
- Proposition: K is identifiable just in case K is countable

Completeness and Restrictiveness
- * is complete just in case each identifiable K is identifiable by (*, S0), for some choice of S0
- Else * is restrictive

Part III: Properties of the Methods

Timidity and Stubbornness
- Timidity: no inductive leaps without refutation
- Stubbornness: no retractions without refutation
- Belief is Bayesian in the non-problematic case
- All the proposed methods are timid and stubborn
- A vestige of the dogma that probability rules induction

Local Consistency
- Local consistency: the updated belief must always be consistent with the current datum
- All the methods under consideration are designed to be locally consistent

Positive Order-Invariance
- Positive order-invariance: the ranking among worlds satisfying all the data so far is preserved
- All the methods considered are positively order-invariant

Data-Retentiveness
- Data-retentiveness: each world satisfying all the data is placed below (strictly more plausible than) each world failing to satisfy some datum
- Data-retentiveness is sufficient but not necessary for perfect memory
- *C, *L are data-retentive
- *R,a, *J,a are data-retentive if a is above the top of S

Enumerate and Test
- A method enumerates and tests just in case it is locally consistent, positively order-invariant, and data-retentive
- Enumerate-and-test methods: *C, *L, and the methods with parameter a if a is above the top of S0
- Picture: the entrenchment ordering on live possibilities is preserved; refuted possibilities go to an epistemic dump above

Completeness
- Proposition: if * enumerates and tests, then * is complete
- Proof sketch: let S0 be an enumeration of K, and let e be in K
- Feed the successive data along e: [0, e(0)], [1, e(1)], ..., [n, e(n)], ...
- By local consistency, positive order-invariance, and data-retentiveness, each refuted world is lifted above the live ones while the order of the live worlds is preserved
- Eventually every world enumerated below e is refuted, so belief converges to {e}
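The mechanism of this proof can be simulated in the toy model: a small hypothetical K of three named trajectories stands in for a countable K, S0 enumerates it, and lexicographic updating plays the enumerate-and-test method.

```python
def lex(S, E):
    """Lexicographic updating: an enumerate-and-test method."""
    live = {w: r for w, r in S.items() if w in E}
    dead = {w: r for w, r in S.items() if w not in E}
    out, top = {}, 0
    if live:
        m = min(live.values())
        out = {w: r - m for w, r in live.items()}
        top = max(out.values()) + 1
    if dead:
        m = min(dead.values())
        out.update({w: top + (r - m) for w, r in dead.items()})
    return out

# Hypothetical K: three named trajectories (functions n -> outcome).
K = {
    "all_zeros":       lambda n: 0,
    "all_ones":        lambda n: 1,
    "zeros_then_ones": lambda n: 0 if n < 2 else 1,
}
S = {name: i for i, name in enumerate(K)}  # S0 enumerates K

true = K["zeros_then_ones"]
for n in range(5):
    # The datum [n, e(n)]: all trajectories agreeing with e at stage n.
    E = {name for name, t in K.items() if t(n) == true(n)}
    S = lex(S, E)

print({w for w, r in S.items() if r == 0})  # {'zeros_then_ones'}
```

After stage 2 both rivals have been refuted and dumped above, so belief has converged to the true trajectory, as the proposition predicts.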

Question
- What about the methods that aren't data-retentive?
- Are they complete?
- If not, can they be objectively compared?

Part IV: The Goodman Hierarchy

The Grue Operation (Nelson Goodman)
- A way to generate inductive problems of ever higher difficulty
- e • n = (e|n)(n|e)
