
Page 1: Learning from Inconsistencies in an Integrated Cognitive Architecture

Learning from Inconsistencies in an Integrated Cognitive Architecture

The First Conference on Artificial General Intelligence (AGI-08)
Memphis, March 1st, 2008

Kai-Uwe Kühnberger (with Peter Geibel, Helmar Gust, Ulf Krumnack, Ekaterina Ovchinnikova, Angela Schwering, Tonio Wandmacher)

Universität Osnabrück

Page 2: Learning from Inconsistencies in an Integrated Cognitive Architecture

Overview

Introduction
  Learning in Cognitive Systems

The I-Cog Architecture
  General Overview of the System

Learning from Inconsistencies
  General Remarks
  Learning from Inconsistencies in Analogy Making and the Overall System

Conclusions


Page 3: Learning from Inconsistencies in an Integrated Cognitive Architecture

Introduction

Learning in Cognitive Systems


Page 4: Learning from Inconsistencies in an Integrated Cognitive Architecture

Learning

Cognitive architectures are usually based on a number of different modules (example: hybrid systems). Obviously, coherence problems and consistency clashes can occur, in particular in hybrid systems.

In hybrid architectures, two main questions can be asked:
  On which level should learning be implemented?
  What are plausible strategies for resolving inconsistencies?

Idea of this talk: use occurring inconsistencies as a mechanism (trigger) of learning.
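The trigger idea can be made concrete with a small sketch. This is a toy illustration, not the I-Cog implementation; the `KnowledgeBase` class, the string encoding of negation, and the retraction-based repair are assumptions made purely for this example:

```python
# Toy illustration (hypothetical, not I-Cog code): a knowledge base that
# treats a detected contradiction as the trigger for an adaptation step.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()  # literals as strings; "not_p" encodes the negation of "p"

    def tell(self, literal):
        self.facts.add(literal)

    def inconsistent(self):
        # A clash: some literal is asserted together with its negation.
        return any(("not_" + f) in self.facts for f in self.facts)

    def adapt(self):
        # Minimal repair strategy (an assumption of this sketch): retract
        # both members of every clashing pair. A real system would instead
        # revise the less trusted or more general rule.
        clashing = {f for f in self.facts if ("not_" + f) in self.facts}
        self.facts -= clashing | {"not_" + f for f in clashing}

kb = KnowledgeBase()
kb.tell("penguin_flies")      # inherited from a general "birds fly" rule
kb.tell("not_penguin_flies")  # learned exception
if kb.inconsistent():         # the inconsistency triggers the adaptation
    kb.adapt()
```

The point is architectural, not the particular repair: detecting the clash is what starts the learning step.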


Page 5: Learning from Inconsistencies in an Integrated Cognitive Architecture

The I-Cog Architecture

General Overview


Page 6: Learning from Inconsistencies in an Integrated Cognitive Architecture

A Proposal: I-Cog

I-Cog is a modular system consisting of three main modules:

Analogy Engine (AE)
  Claim: AE is able to cover a variety of different reasoning abilities.
Ontology Rewriting Device (ORD)
  Claim: Ontological background knowledge needs to be implemented in a way that makes dynamic updates possible.
Neuro-Symbolic Learning Device (NSLD)
  Claim: NSLD enables robust learning of symbolic theories from noisy data.

Finally, these three modules interact in a non-trivial way and are governed by a heuristic-driven Control Device (CD).

Kühnberger, K.-U. et al. (2007): I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities, in Proceedings of MICAI 2007, LNAI 4827, pp. 203-214, Springer.


Page 7: Learning from Inconsistencies in an Integrated Cognitive Architecture

The Overall I-Cog Architecture


Page 8: Learning from Inconsistencies in an Integrated Cognitive Architecture

Learning in I-Cog

Learning is based on occurring inconsistencies:

In the case of ORD, rewriting algorithms make sure that inconsistencies are resolved (where this is possible).
  Ovchinnikova, E. & Kühnberger, K.-U. (2007). Debugging Automatically Extended Ontologies, GLDV-Journal for Computational Linguistics and Language Technology, 23(2):19-33.

NSLD is a learning device in which weights are adjusted based on backpropagation of errors.
  Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory, in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Series "Computational Intelligence", Springer, pp. 209-240.

In the case of AE, many adaptation processes can be reduced to occurring inconsistencies.

Claim 1: Learning is distributed over the whole system.
Claim 2: Learning takes place because errors / inconsistencies occur, triggering an adaptation process.


Page 9: Learning from Inconsistencies in an Integrated Cognitive Architecture

Learning from Inconsistencies

The Example of Analogical Reasoning


Page 10: Learning from Inconsistencies in an Integrated Cognitive Architecture

General Remarks

Inconsistencies are classically connected to logic:
  If from a set of axioms Γ (relative to a language L) both a formula φ and its negation ¬φ can be entailed, then Γ is inconsistent.

We use the term "inconsistency" rather loosely and do not restrict this concept to logic. Here are some examples:

Every analogy establishes a relation that resolves a clash of concepts, information, interpretations etc.
  Gust, H. & Kühnberger, K.-U. (2006). Explaining Effective Learning by Analogical Reasoning, 28th Annual Conference of the Cognitive Science Society, pp. 1417-1422.

Ontology generation / learning
  Ovchinnikova, E., Wandmacher, T. & Kühnberger, K.-U. (2007). Solving Terminological Inconsistency Problems in Ontology Design, IBIS 4:65-80.

Non-monotonicity effects in reasoning
  Ovchinnikova, E. & Kühnberger, K.-U. (2006). Adaptive ALE-TBox for Extending Terminological Knowledge, in Proceedings of AI'06, LNAI 4304, Springer, pp. 1111-1115.
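The classical definition can be checked mechanically for a toy propositional fragment. The following sketch is my own illustration (the rule format and the naive forward chaining are assumptions, not part of the talk); it flags a set of axioms as inconsistent when both a literal and its negation are derivable:

```python
# Gamma as propositional Horn rules plus facts; "-p" encodes the negation of "p".

def closure(facts, rules):
    """Forward-chain rules of the form (premises, conclusion) to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def inconsistent(facts, rules):
    # Gamma is inconsistent iff some literal and its negation are both entailed.
    derived = closure(facts, rules)
    return any(("-" + lit) in derived for lit in derived)

# Example: birds fly, penguins are birds, penguins do not fly.
rules = [(["penguin"], "bird"), (["bird"], "fly"), (["penguin"], "-fly")]
print(inconsistent(["penguin"], rules))  # True: fly and -fly are both derived
print(inconsistent(["bird"], rules))     # False
```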


Page 11: Learning from Inconsistencies in an Integrated Cognitive Architecture

The Analogy Engine

The Analogy Engine is based on Heuristic-Driven Theory Projection (HDTP), a mathematically sound theory of computing analogies. It is based on anti-unification of a source theory ThS and a target theory ThT, and it has been applied to various domains like naïve physics, metaphors, geometric figures etc.

Some features:
  Complex formulas can be anti-unified.
  A theorem prover allows the re-representation of formulas.
  Whole theories can be generalized.
  The involved processes are governed by heuristics.

Gust, H., Kühnberger, K.-U. & Schmid, U. (2006). Metaphors and Heuristic-Driven Theory Projection (HDTP), Theoretical Computer Science, 354:98-117.
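The core operation, anti-unification, can be illustrated with a small sketch. HDTP itself uses a restricted higher-order anti-unification over whole theories, governed by heuristics; the version below is a deliberately simplified variant (the nested-tuple term representation and the X0, X1, ... variable naming are my assumptions) that also generalizes mismatching function symbols, which is enough to reproduce the Op1(x,E) pattern of the recursion example that follows:

```python
# Simplified anti-unification sketch (not HDTP itself): terms are atoms
# (strings) or tuples (functor, arg1, ..., argN). Wherever the two terms
# disagree, a shared variable is introduced; the same mismatch always
# reuses the same variable, keeping the generalization least general.

def anti_unify(s, t, subst=None):
    if subst is None:
        subst = {}
    if s == t:
        return s, subst
    # Equal arity: generalize position-wise (this also generalizes the
    # functor itself -- a simplified nod to HDTP's higher-order variables).
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        parts = []
        for a, b in zip(s, t):
            g, subst = anti_unify(a, b, subst)
            parts.append(g)
        return tuple(parts), subst
    # Mismatch: introduce (or reuse) a variable for this pair of subterms.
    if (s, t) not in subst:
        subst[(s, t)] = "X%d" % len(subst)
    return subst[(s, t)], subst

# add(x,0) vs mult(x,s(0)) generalizes to Op1(x,E):
gen, subst = anti_unify(("add", "x", "0"), ("mult", "x", ("s", "0")))
print(gen)    # ('X0', 'x', 'X1')  -- read as Op1(x, E)
print(subst)  # {('add', 'mult'): 'X0', ('0', ('s', '0')): 'X1'}
```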


Page 12: Learning from Inconsistencies in an Integrated Cognitive Architecture

Recursion Example I

Source ThS: Addition
  ∀x: add(x,0) = x
  ∀x,y: add(x,s(y)) = s(add(x,y))

Target ThT: Multiplication
  ∀x: mult(x,s(0)) = x
  ∀x,y: mult(x,s(y)) = add(x,mult(x,y))

Generalized Theory ThG:
  ∀x: Op1(x,E) = x
  ∀x,y: Op1(x,s(y)) = Op2(Op1(x,y))

For the generalized theory, the following substitutions need to be established:
  Source: E → 0, Op1 → add, Op2 → s
  Target: E → s(0), Op1 → mult, Op2 → λz.add(x,z)
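The two instantiations can be checked numerically. In this sketch (my illustration; natural numbers stand in for successor terms s(...s(0)...)), add and mult are defined exactly by the recursion equations above, and the generalized axiom Op1(x,s(y)) = Op2(Op1(x,y)) is tested under both substitutions:

```python
def s(n):  # successor
    return n + 1

def add(x, y):   # add(x,0) = x; add(x,s(y)) = s(add(x,y))
    return x if y == 0 else s(add(x, y - 1))

def mult(x, y):  # mult(x,s(0)) = x; mult(x,s(y)) = add(x,mult(x,y))
    return x if y == 1 else add(x, mult(x, y - 1))

# Generalized axiom: Op1(x, s(y)) = Op2(Op1(x, y))
for x in range(1, 6):
    for y in range(1, 6):
        assert add(x, s(y)) == s(add(x, y))         # Op1 = add,  Op2 = s
        assert mult(x, s(y)) == add(x, mult(x, y))  # Op1 = mult, Op2 = λz.add(x,z)
```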


Page 13: Learning from Inconsistencies in an Integrated Cognitive Architecture

Recursion Example II

Source ThS: Addition
  ∀x: add(0,x) = x
  ∀x,y: add(s(y),x) = add(y,s(x))

Target ThT: Multiplication
  ∀x: mult(0,x) = 0
  ∀x,y: mult(s(y),x) = add(x,mult(y,x))

Generalized Theory ThG:
  ∀x: Op(E,x) = x

Trying to anti-unify the first source axiom add(0,x) = x with the first target axiom mult(0,x) = 0 directly is not possible. But by using the other axioms we can derive

  mult(s(0),x) = add(x,mult(0,x)) = add(x,0) = … = add(0,x)

Hence we can derive the additional axiom ∀x: mult(s(0),x) = x. For the generalized theory, the following substitutions can be established:
  Source: E → 0, Op → add
  Target: E → s(0), Op → mult
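The re-representation step can likewise be checked numerically. The sketch below (my illustration) assumes the target recursion runs on the first argument, mult(s(y),x) = add(x,mult(y,x)), as the displayed derivation requires, and confirms the derived axiom ∀x: mult(s(0),x) = x that makes the anti-unification with add(0,x) = x succeed:

```python
def s(n):  # successor
    return n + 1

def add(y, x):   # add(0,x) = x; add(s(y),x) = add(y,s(x))
    return x if y == 0 else add(y - 1, s(x))

def mult(y, x):  # mult(0,x) = 0; mult(s(y),x) = add(x, mult(y,x))
    return 0 if y == 0 else add(x, mult(y - 1, x))

# Derived axiom 3 agrees with source axiom 1 on every tested input:
for x in range(8):
    assert mult(s(0), x) == x  # mult(s(0),x) = add(x,0) = ... = x
    assert add(0, x) == x
```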


Page 14: Learning from Inconsistencies in an Integrated Cognitive Architecture

Conclusion

Main claims:
  In cognitive architectures, "inconsistencies" (in the broad sense used here) should be considered a trigger for learning and adaptation.
  These adaptation processes can be relevant for:
    adapting background knowledge,
    reasoning processes of various types,
    neuro-based learning approaches.
  Learning in the system is therefore distributed and continuously realized.


Page 15: Learning from Inconsistencies in an Integrated Cognitive Architecture

Thank you very much!!

Questions?


Page 16: Learning from Inconsistencies in an Integrated Cognitive Architecture

References

Analogical Reasoning (Selection)

Gust, H., Kühnberger, K.-U. & Schmid, U. (2006). Metaphors and Heuristic-Driven Theory Projection (HDTP), Theoretical Computer Science, 354:98-117.

Gust, H. & Kühnberger, K.-U. (2006). Explaining Effective Learning by Analogical Reasoning, in: R. Sun & N. Miyake (eds.): 28th Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, pp. 1417-1422.

Gust, H., Krumnack, U., Kühnberger, K.-U. & Schwering, A. (2007). An Approach to the Semantics of Analogical Relations, in S. Vosniadou et al. (eds.): Proceedings of EuroCogSci 2007, Lawrence Erlbaum, pp. 640-645.

Krumnack, U., Schwering, A., Gust, H. & Kühnberger, K.-U. (2007). Restricted Higher-Order Anti-Unification for Analogy Making, to appear in Proceedings of AI’07, Springer.

Gust, H., Krumnack, U., Kühnberger, K.-U. & Schwering, A. (2008). Analogical Reasoning: A Core of Cognition, to appear in Künstliche Intelligenz 1/2008.


Page 17: Learning from Inconsistencies in an Integrated Cognitive Architecture

References

Neuro-Symbolic Integration (Selection)

Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning and Memorizing Models of Logical Theories in a Hybrid Learning Device, to appear in Proceedings of ICONIP 2007, Springer.

Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory, in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Series “Computational Intelligence”, Springer, pp. 209-240.

Ontology Rewriting (Selection)

Ovchinnikova, E. & Kühnberger, K.-U. (2007). Debugging Automatically Extended Ontologies, GLDV-Journal for Computational Linguistics and Language Technology, 23(2):19-33.

Ovchinnikova, E., Wandmacher, T. & Kühnberger, K.-U. (2007). Solving Terminological Inconsistency Problems in Ontology Design, International Journal of Interoperability in Business Information Systems, 4:65-80.

Ovchinnikova, E. & Kühnberger, K.-U. (2006). Adaptive ALE-TBox for Extending Terminological Knowledge, in A. Sattar & B. H. Kang (eds.): Proceedings of AI’06, LNAI 4304, Springer, pp. 1111-1115.


Page 18: Learning from Inconsistencies in an Integrated Cognitive Architecture

References

I-Cog

Kühnberger, K.-U., Geibel, P., Gust, H., Krumnack, U., Ovchinnikova, E., Schwering, A. & Wandmacher, T. (2008): Learning from Inconsistencies in an Integrated Cognitive Architecture, to appear in Proceedings of AGI 2008, IOS Press.

Kühnberger, K.-U. (2007): Principles for the Foundation of Integrated Higher Cognition (Abstract). In: D. S. McNamara & J. G. Trafton (Eds.), Proceedings of the CogSci 2007, (p. 1796). Austin, TX: Cognitive Science Society.

Kühnberger, K.-U., Wandmacher T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007): I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities, in Proceedings of MICAI 2007, LNAI 4827, pp. 203-214, Springer.

Kühnberger, K.-U., Wandmacher, T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007): Modeling Human-Level Intelligence by Integrated Cognition in a Hybrid Architecture, in P. Hitzler, T. Roth-Berghofer, S. Rudolph: FAInt-07, Workshop at KI 2007, CEUR-WS, vol. 277, pp. 1-15.


Page 19: Learning from Inconsistencies in an Integrated Cognitive Architecture

Members of the AI group

Peter Geibel

Karl Gerhards

Helmar Gust

Ulf Krumnack

Kai-Uwe Kühnberger

Jens Michaelis

Ekaterina Ovchinnikova

Angela Schwering

Konstantin Todorov

Ulas Türkmen

Tonio Wandmacher
