Inductive Amnesia

  • Published on
    06-Jan-2016


DESCRIPTION

Inductive Amnesia: The Reliability of Iterated Belief Revision. Slides opening with a table of opposites: Even/Odd, Straight/Crooked, Reliability/Confirmation, Performance/Primitive norms, Correctness/Coherence, Classical statistics/Bayesianism, Learning theory/Belief Revision Theory.

Transcript

Inductive Amnesia: The Reliability of Iterated Belief Revision

A Table of Opposites
• Even / Odd
• Straight / Crooked
• Reliability / Confirmation
• Performance / Primitive norms
• Correctness / Coherence
• Classical statistics / Bayesianism
• Learning theory / Belief revision theory

The Idea
• Belief revision is inductive reasoning.
• A restrictive norm prevents us from finding truths we could have found by other means.
• Some proposed belief revision methods are restrictive.
• Their restrictiveness is expressed as inductive amnesia.

Inductive Amnesia
• No restriction on memory...
• No restriction on predictive power...
• But prediction causes memory loss, and perfect memory precludes prediction!
• A fundamental dilemma.

Outline
I. Seven belief revision methods
II. Belief revision as learning
III. Properties of the methods
IV. The Goodman hierarchy
V. Negative results
VI. Positive results
VII. Discussion

Points of Interest
• Strong negative and positive results
• Short-run advice from limiting analysis
• 2 is magic for reliable belief revision
• Learning as cube rotation
• Grue

Part I: Iterated Belief Revision

Bayesian (Vanilla) Updating
• Propositions are sets of possible worlds.
• On new evidence E, the belief state B is simply intersected with E: B * E = B ∩ E.
• Perfect memory, but no inductive leaps.

Epistemic Hell
• Surprise! When the new evidence E contradicts B, B ∩ E = ∅: epistemic hell.
• The problem matters for scientific revolutions, suppositional reasoning, conditional pragmatics, decision theory, game theory, and databases.

Ordinal Entrenchment (Spohn 88)
• An epistemic state S maps worlds to ordinals (levels 0, 1, 2, ..., ω, ω + 1, ...).
• The belief state of S is b(S) = S⁻¹(0).
• The ordering determines the centrality of beliefs.
• Model: orders of infinitesimal probability.

Belief Revision Methods
• A method * takes an epistemic state S and a proposition E to a new epistemic state S * E, and hence a new belief state b(S * E).

Spohn Conditioning *C (Spohn 88)
• New evidence E may contradict b(S); *C conditions the entire entrenchment ordering: the E-worlds drop rigidly to the bottom and the non-E worlds are discarded.
• Perfect memory; inductive leaps.
• No epistemic hell on consistent evidence sequences; epistemic hell on inconsistent ones.

Lexicographic Updating *L (Spohn 88, Nayak 94)
• Lift the refuted possibilities above the non-refuted possibilities, preserving order within each group.
• Perfect memory on consistent sequences; inductive leaps; no epistemic hell.

Minimal or Natural Updating *M (Spohn 88, Boutilier 93)
• Drop the lowest possibilities consistent with the data to the bottom and raise everything else up one notch.
• Inductive leaps; no epistemic hell. But...

Amnesia
• What goes up can come down: after a later surprise, belief no longer entails past data.

The Flush-to-a Method *F,a (Goldszmidt and Pearl 94)
• Send the non-E worlds to a fixed level a and drop the E-worlds rigidly to the bottom.
• Perfect memory on sequentially consistent data if a is high enough; inductive leaps; no epistemic hell.
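The ranking mechanics above are easy to make concrete. Below is a minimal Python sketch (mine, not the presentation's): an epistemic state is a dict from worlds to natural-number ranks, b(S) is the set of worlds at the bottom level, evidence E is a set of worlds, and each datum is assumed consistent with at least one world wherever a method needs that. The first four methods:

```python
# Toy ranking model of the revision operators described above.
# Illustrative sketch only; the names and representation are mine.
# An epistemic state S is a dict {world: rank}; lowest rank = believed.

def normalize(S):
    """Shift ranks so that the lowest occupied level is 0."""
    m = min(S.values())
    return {w: r - m for w, r in S.items()}

def belief(S):
    """b(S): the set of worlds at the bottom level."""
    m = min(S.values())
    return {w for w, r in S.items() if r == m}

def conditioning(S, E):
    """*C: discard the non-E worlds; E-worlds drop rigidly to the bottom."""
    live = {w: r for w, r in S.items() if w in E}
    if not live:
        raise ValueError("epistemic hell: E contradicts every world")
    return normalize(live)

def lexicographic(S, E):
    """*L: lift the refuted worlds above all E-worlds, preserving the
    order within each group."""
    ins = normalize({w: r for w, r in S.items() if w in E})
    outs = {w: r for w, r in S.items() if w not in E}
    if not outs:
        return ins
    top = max(ins.values()) + 1
    return {**ins, **{w: top + r for w, r in normalize(outs).items()}}

def minimal(S, E):
    """*M: drop the lowest E-worlds to the bottom; raise everything
    else one notch."""
    m = min(r for w, r in S.items() if w in E)
    return {w: 0 if (w in E and r == m) else r + 1 for w, r in S.items()}

def flush(S, E, a):
    """*F,a: send the non-E worlds to the fixed level a; E-worlds drop
    rigidly to the bottom."""
    m = min(r for w, r in S.items() if w in E)
    return {w: (r - m) if w in E else a for w, r in S.items()}
```

For example, minimal({'w1': 0, 'w2': 1}, {'w2'}) yields a state with w2 at rank 0 and w1 at rank 1: the lowest world consistent with the datum drops to the bottom and everything else rises one notch.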
Ordinal Jeffrey Conditioning *J,a (Spohn 88)
• Drop the E-worlds to the bottom; drop the non-E worlds to the bottom and then jack them up to level a.
• Perfect memory on consistent sequences if a is large enough; no epistemic hell; reversible. But...

Empirical Backsliding
• Ordinal Jeffrey conditioning can increase the plausibility of a refuted possibility.

The Ratchet Method *R,a (Darwiche and Pearl 97)
• Like ordinal Jeffrey conditioning, except the refuted possibilities move up by a from their current positions (from level b to level b + a).
• Perfect memory if a is large enough; inductive leaps; no epistemic hell.

Part II: Belief Revision as Learning

Iterated Belief Revision
• Revision by a finite sequence of propositions is defined recursively:
  S0 * () = S0
  S0 * (E0, ..., En, En+1) = (S0 * (E0, ..., En)) * En+1
• [Diagram: S0 goes to S1 on evidence E0, S1 goes to S2 on E1, with belief states b(S0), b(S1), b(S2).]

A Very Simple Learning Paradigm
• A mysterious system emits an outcome sequence: 0, 0, 1, 0, 0, ...
• e ranges over the possible infinite trajectories; e|n is the initial segment of e through stage n.

Empirical Propositions
• Empirical propositions are sets of possible trajectories.
• [s] = the proposition that the finite sequence s has occurred.
• [k, n] = the proposition that outcome k occurs at stage n.
• Special cases: {e} = the proposition that the trajectory is exactly e; the fan of s = the set of trajectories extending s.

Trajectory Identification
• (*, S0) identifies e iff for all but finitely many n, b(S0 * ([0, e(0)], ..., [n, e(n)])) = {e}.
• [Animation: the successive belief states b(S0), b(S1), b(S2), ... shrink and converge to {e}.]

Reliability
• Let K be a set of possible outcome trajectories.
• (*, S0) identifies K iff (*, S0) identifies each e in K.

Identifiability Characterized
• Proposition: K is identifiable just in case K is countable.

Completeness and Restrictiveness
• * is complete iff each identifiable K is identifiable by (*, S0) for some choice of S0.
• Otherwise * is restrictive.
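Continuing the same toy model (and reusing normalize and belief from the sketch in Part I), here are the two remaining parameterized methods together with the identification trial itself. Representing trajectories as finite tuples and checking belief only at the final stage is my simplification of the "all but finitely many n" clause:

```python
def jeffrey(S, E, a):
    """*J,a: E-worlds drop to the bottom; non-E worlds drop to the
    bottom and are then jacked up to level a."""
    mE = min(r for w, r in S.items() if w in E)
    out = [r for w, r in S.items() if w not in E]
    mX = min(out) if out else 0
    return {w: (r - mE) if w in E else (r - mX + a) for w, r in S.items()}

def ratchet(S, E, a):
    """*R,a: like *J,a, except refuted worlds move up by a from their
    current positions."""
    mE = min(r for w, r in S.items() if w in E)
    return {w: (r - mE) if w in E else (r + a) for w, r in S.items()}

def datum(worlds, n, k):
    """[k, n]: the proposition that outcome k occurs at stage n."""
    return {w for w in worlds if w[n] == k}

def identifies(revise, S0, e):
    """Feed [0, e(0)], ..., [n, e(n)] along the whole finite trajectory
    e and report whether belief has converged to {e}."""
    S = dict(S0)
    for n in range(len(e)):
        S = revise(S, datum(S.keys(), n, e[n]))
    return belief(S) == {e}
```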
Part III: Properties of the Methods

Timidity and Stubbornness
• Timidity: no inductive leaps without refutation.
• Stubbornness: no retractions without refutation.
• So belief remains Bayesian in the non-problematic case.
• All the proposed methods are timid and stubborn: a vestige of the dogma that probability rules induction.

Local Consistency
• Local consistency: the updated belief must always be consistent with the current datum.
• All the methods under consideration are designed to be locally consistent.

Positive Order-Invariance
• Positive order-invariance: the ranking among worlds satisfying all the data so far is preserved.
• All the methods considered are positively order-invariant.

Data-Retentiveness
• Data-retentiveness: each world failing to satisfy some datum is placed above each world satisfying all the data.
• Data-retentiveness is sufficient but not necessary for perfect memory.
• *C and *L are data-retentive; *R,a and *J,a are data-retentive if a is above the top of S.

Enumerate and Test
• A method enumerates and tests just in case it is locally consistent, positively order-invariant, and data-retentive.
• Enumerate-and-test methods: *C, *L, and the methods with parameter a, provided a is above the top of S0.
• Picture: the entrenchment ordering on the live possibilities is preserved, while the refuted possibilities go to an epistemic dump above them.

Completeness
• Proposition: If * enumerates and tests, then * is complete.
• Proof: Let S0 be an enumeration of K, let e be in K, and feed the successive data along e: [0, e(0)], [1, e(1)], ..., [n, e(n)], ...
• At each stage, local consistency, positive order-invariance, and data-retentiveness lift the refuted worlds above the live ones while preserving the order of the live ones; eventually e sits uniquely at the bottom, so the belief states converge to {e}.
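The proof can be watched running in the toy model: give *C an initial state that simply enumerates K, with the i-th trajectory at rank i, and it identifies every member of K. A small check on a hand-picked, hypothetical K of length-5 trajectories, reusing the sketches above:

```python
# K: a small set of distinct length-5 binary trajectories (illustrative).
K = [(0, 0, 0, 0, 0), (1, 0, 0, 0, 0), (0, 1, 1, 1, 1),
     (1, 1, 0, 0, 0), (0, 0, 1, 0, 1)]

# S0 enumerates K: the i-th trajectory sits at rank i.
S0 = {w: i for i, w in enumerate(K)}

# The enumerate-and-test method *C identifies every member of K.
assert all(identifies(conditioning, S0, e) for e in K)
```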
Question
• What about the methods that aren't data-retentive? Are they complete?
• If not, can they be objectively compared?

Part IV: The Goodman Hierarchy

The Grue Operation (Nelson Goodman)
• A way to generate inductive problems of ever higher difficulty.
• The grue of e at n, written e • n, agrees with e up to stage n and disagrees with e at every stage thereafter.
• Grues compose: (e • n) • m, ((e • n) • m) • k, and so on.

The Goodman Hierarchy
• Gn(e) = the set of all trajectories you can get by gruing e at up to n positions.
• Gn^even(e) = the set of all trajectories you can get by gruing e at an even number of distinct positions, up to 2n.

The Goodman Limit
• Gω(e) = the union of the Gn(e); Gω^even(e) = the union of the Gn^even(e).
• Proposition: Gω^even(e) = the set of all finite variants of e.

The Goodman Spectrum
• [Table: for each method (Min, Flush, Jeffrey, Ratchet, Lex, Cond) and each problem G0(e), G1(e), G2(e), G3(e), Gω(e), whether the method succeeds and, for the parameterized methods, which values of a suffice; the entries include yes, no, a = 0, a = 1, a = 2, a = 3, a = n + 1, and a = ω.]

The Even Goodman Spectrum
• [Table: the same comparison on G0^even(e), G1^even(e), G2^even(e), Gn^even(e), Gω^even(e), with entries of the same kinds.]
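On finite binary trajectories the grue operation and the hierarchy it generates take only a few lines. This sketch (mine) reads "gruing e at n" as flipping every outcome from stage n on, matching the description above:

```python
def grue(e, n):
    """e . n: agree with e up to stage n, disagree everywhere after."""
    return e[:n] + tuple(1 - k for k in e[n:])

def goodman(e, n):
    """Gn(e): all trajectories obtainable from e by at most n grues."""
    G = {e}
    for _ in range(n):
        G = G | {grue(c, i) for c in G for i in range(len(e) + 1)}
    return G

e = (0, 0, 0, 0)
assert grue(e, 2) == (0, 0, 1, 1)
# Two grues cancel after the second grue point: a finite variant of e.
assert grue(grue(e, 1), 3) == (0, 1, 1, 0)
assert goodman(e, 0) == {e}
```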
Part V: Negative Results

Epistemic Duality
• Tabula rasa: Bayesian.
• Conjectures and refutations: Popperian.

Epistemic Extremes
• One extreme: perfect memory, no projections.
• The other extreme: projects the future, but may forget.
• *J,2 (a = 2) sits between the extremes.

Opposing Epistemic Pressures
• Rarefaction is needed for inductive leaps; compression is needed for memory.
• Identification requires both. Is there a critical value of a for which they can be balanced for a given problem K?

Methods *S,1 and *M Fail on G1(e)
• Proof: Suppose otherwise. Feed e until e is uniquely at the bottom.
• By the well-ordering condition, there is an e' agreeing with the data so far somewhere above e.
• Now feed e' forever. By the stage n at which e' departs from e, the picture is the same (positive order-invariance, timidity, and stubbornness).
• At stage n + 1, e stays at the bottom (timidity and stubbornness); e' cannot travel down, and e does not rise (by the definitions of the rules).
• So some trajectory other than e' reaches the bottom at least as soon as e' does, and belief never converges to {e'}: contradiction.

Method *R,1 Fails on G2(e) (with Oliver Schulte)
• Proof: Suppose otherwise. Bring e uniquely to the bottom, say at stage k.
• Start feeding a = e • k.
• By some stage k', a is uniquely at the bottom, so between k + 1 and k' there is a first stage j at which no finite variant of e is at the bottom.
• Let c in G2(e) be a finite variant of e that rises to level 1 at j. So c(j - 1) ≠ a(j - 1).
• Let d agree with a up to j and with e thereafter. So d is in G2(e).
• Since d is a finite variant of e, d is at least as high as level 1 at j.
• Show: c agrees with e after j.
  Case j = k + 1: then c could have been chosen as e, since e is uniquely at the bottom at k.
  Case j > k + 1: then c wouldn't have been at the bottom earlier if it hadn't agreed with a (disagreed with e), so c has already used up its two grues against e.
• Feed c forever after. By positive order-invariance, the method either never projects or forgets the refutation of c at j - 1: contradiction either way.
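The amnesia driving these negative results can be reproduced directly in the toy model, reusing minimal, datum, and belief from the sketches above (the three worlds are my own illustrative choice):

```python
# Three candidate trajectories over two stages.
u, v, w = (0, 0), (1, 1), (1, 0)
S = {u: 0, v: 1, w: 2}

S = minimal(S, datum(S.keys(), 0, 1))   # observe outcome 1 at stage 0
print(belief(S))                        # {(1, 1)}: entails the datum

S = minimal(S, datum(S.keys(), 1, 0))   # surprise: outcome 0 at stage 1
print(belief(S))                        # {(0, 0)}: contradicts stage 0!
```

After the stage-1 surprise the believed world contradicts the already-observed stage-0 datum: what went up has come back down.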
The Internal Problem of Induction
• Necessary condition for success by positively order-invariant methods: no data stream may be a k-limit point of data streams ranked as low as it, once it has been presented for k steps.
• [Diagrams: a bad configuration violating the condition and a good one satisfying it.]

Corollary: Stacking Lemma
• Necessary condition for identification of Gn+1(e) by positively order-invariant methods: if e is at the bottom level after being presented up to stage k, then some data stream e' in Gn+1(e) - Gn(e) agreeing with the data so far is at least at level n + 1.
• Why? Otherwise the limit-point condition above would be violated.

Even Stacking Lemma
• Similarly for Gn+1^even(e).

Method *F,n Fails on Gn(e)
• Proof for a = 4: Suppose otherwise. Bring e uniquely to the bottom.
• Apply the stacking lemma: let e' be in G4(e) - G3(e) at or above level 4.
• Let e'' be the same as e' except at the first place k where e' differs from e, and feed e'' forever after.
• Timidity, s... [the transcript breaks off here]
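The role of the parameter a in these failure results already shows up when the two-stage example above is run with flush: a = 1 loses the first datum, while a = 3 keeps it. (Again reusing the toy sketches; the numbers illustrate the mechanism rather than the a = 4 case of the proof.)

```python
p, q, r = (0, 0), (1, 1), (1, 0)
S0 = {p: 0, q: 1, r: 2}

def feed(level):
    """Run the two-datum experiment with flush-to-level."""
    S = flush(S0, datum(S0.keys(), 0, 1), level)  # outcome 1 at stage 0
    S = flush(S, datum(S.keys(), 1, 0), level)    # outcome 0 at stage 1
    return belief(S)

print(feed(1))  # {(0, 0), (1, 0)}: (0, 0) contradicts the stage-0 datum
print(feed(3))  # {(1, 0)}: both data retained
```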