
Truth maintenance in blackboard environment

Mladen Stanojević*, Dušan Velašević†, and Sanja Vraneš*

*Mihailo Pupin Institute, Volgina 15, 11060 Belgrade, Yugoslavia
mladen@lab200.imp.bg.ac.yu
†Faculty of Electrical Engineering, Bulevar revolucije 73, 11000 Belgrade, Yugoslavia

Abstract: Constraint satisfaction and search problems generally fall into the class of problems for which a direct algorithmic solution does not exist. The solution of these problems requires the examination of state spaces. A problem solver (inference engine) alone is not able to organize and maintain a state space consistently. For this purpose a so-called truth maintenance system is required. Our truth maintenance system (MEKON) organizes data within a data abstraction called a context. Each context corresponds to one problem state and contains currently believed data. The truth maintenance system provides believed data retrieval, belief revision, contradiction handling and non-monotonicity handling, the features that help a problem solver to examine state spaces. MEKON represents an ATMS-like system implemented within BEST (Blackboard-based Expert System Toolkit). However, some special MEKON features such as state variables, context sensitive generation of assumptions and explicit context generation, that are not present in standard ATMSs, facilitate not only the solution of constraint satisfaction problems, but also the solution of search problems (not provided by standard ATMSs). Being deeply integrated with Prolog/Rex, BEST's knowledge representation language, and BEST's inference engine, MEKON provides a simple and efficient means for the examination of state spaces. Facts, hypotheses (assumptions), and concepts (frames) are used to describe a problem state, contexts are used to represent points in the state spaces, while rules are used to perform state transitions. MEKON is the only truth maintenance system that provides truth maintenance capabilities on local blackboards, thus enabling the solution of complex problems which include different kinds of real constraint satisfaction and search problems concerning diagnosis, allocation tasks, classification tasks, planning or scenario making.

Keywords: artificial intelligence, blackboard systems, constraint satisfaction problems, expert systems, search problems, truth maintenance

1. Introduction

Truth Maintenance Systems (TMSs) (Stanojević et al., 1994) were developed in an attempt to overcome the problems that stem from the monotonicity in classical reasoning systems. The classical reasoning systems treat all data as true, and are only able to discover the new truths contained in known truths. However, these monotonic reasoning systems cannot be used to solve constraint satisfaction and search problems, because they are not able to cope with the dynamic changes of truth states (non-monotonicity in reasoning) caused by state transitions.

A TMS alone cannot be used to solve a constraint satisfaction or search problem. It must be coupled with a problem-solver, where believed data retrieval, belief revision, contradiction handling, and non-monotonicity handling are the TMS's functions, while pattern-matching, conflict resolution, and rule activation are the inference engine's functions. These functions can be implemented either explicitly, within a problem-solving tool, or implicitly, within a dedicated application tailored to the specific needs of the problem at hand.

TMSs represent non-monotonic reasoning systems, applied mainly in constraint satisfaction and search problems, but also in explanation systems and abductive reasoning. Typically, they explore alternatives, make choices, explore the consequences of the choices and, if during this process a contradiction is detected, the truth maintenance system revises the working memory and gets rid of contradictions. There are a lot of applications that can use and benefit from the truth maintenance techniques, some of which are:

• Planning systems, that use truth maintenance to detect the source of problems and to prevent the generation of ill-formed plans

• Allocation tasks, where the admissible or optimal allocations are determined

• Classification tasks, where a member of a class is recognised upon a complete or incomplete, more or less accurate description

• Diagnostic systems, that explore different potential diagnoses

• Scenario making, where different scenarios are made and examined to assess the consequences of possible decisions and to choose the best one.

Those truth maintenance systems that record the dependencies between the deduced data and their antecedents using justifications can give an explanation as to how each derived datum has been evaluated. This kind of explanation can be given after the problem-solver finishes its work. However, some TMSs are able to give the explanations for a datum which hasn't yet been derived, by performing abductive reasoning (Reiter & de Kleer, 1987). These explanations are in the form of conjunctions of assumptions and tell us under which minimal sets of assumptions a given datum could be derived.

MEKON represents a truth maintenance system implemented within the BEST environment (BEST, 1992; Vraneš et al., 1994; Vraneš & Stanojević, 1995a). Its only task, within BEST, is to provide a means for the examination of state spaces, therefore throughout this paper we will emphasize those aspects of truth maintenance systems that facilitate the solution of constraint satisfaction and search problems. Explanation capabilities in BEST are provided through a distinct explanation system, while abductive reasoning is not provided at all.

BEST (Blackboard-based Expert Systems Toolkit) is aimed at providing the ability to produce large-scale, evolvable, heterogeneous intelligent systems. It incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modeling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his or her application, using any or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective, even for a specific single application, than any technique in isolation or collections of multiple techniques less fully integrated. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge.

Over the past 15 years many truth maintenance systems have been developed. Martins & Shapiro (1988) divided all such systems into three classes: Justification-based Truth Maintenance Systems (JTMSs), Assumption-based Truth Maintenance Systems (ATMSs), and Logic-based Truth Maintenance Systems (LTMSs). These classes differ in the way they represent dependencies between the derived facts and the facts they have been derived from. In JTMSs these dependencies are described using justifications. The justifications in JTMSs are used for the belief update, which is required whenever some facts are added or deleted, but also for contradiction handling. Assumptions in ATMSs determine the conditions under which the facts are valid. The assumptions represent the facts all other facts have been derived from. The dependencies between facts in LTMSs are described using logic-like clauses. TMS (Doyle, 1979) and TRMS (Dearn & McDermott, 1987) represent JTMSs; ATMS (de Kleer, 1986a, 1986b, 1986c), NATMS (de Kleer, 1988), EATMS (Dressler, 1988, 1989), MATMS (Inoue, 1990), ΠATMS (Dubois et al., 1990), HEART (Joubel & Raiman, 1990), SNeBR (Martins & Shapiro, 1988), the viewpoint mechanism in ART (ART, 1987), the world mechanism (Filman, 1988) in KEE (KEE, 1986) and MEKON represent ATMSs; while McAllester's LTMS (McAllester, 1980) and RMS (McDermott, 1991) represent LTMSs.

However, none of these systems is tailored for application in a blackboard environment such as BEST. Furthermore, these TMSs, except for the ones implemented within ART and KEE, are able to solve constraint satisfaction problems, but not planning and scenario-making problems, which represents an unacceptable limit for a TMS used within an expert system toolkit. Some TMSs have been implemented within a blackboard framework (Bharadwaj et al., 1994; Engelmore & Morgan, 1988; Mitra & Dutta, 1994), but they are not able to solve search problems and complex problems that contain many different subproblems.

Constraint satisfaction techniques (de Kleer, 1989; Kumar, 1992), which offer an efficient means for the solution of some constraint satisfaction problems, find only limited use. They do not enable interleaving formulation and solving, therefore they are not able to solve allocation problems in time. Furthermore, constraint satisfaction techniques cannot be used to solve problems with incomplete information (non-monotonicity handling is not provided), nor to solve planning and scenario-making problems. For the reasons given above, we were forced to develop our own truth maintenance system.

The blackboard architecture represents an expert system design in which several independent knowledge sources each examine a common working memory, called a blackboard. Newell (Engelmore & Morgan, 1988) was the first to propose the blackboard architecture, in the early sixties. He was inspired by the situation when several experts work together on a complex problem. A complex problem is decomposed, and each expert solves one of the subproblems. They communicate with each other using a blackboard. An expert starts solving his or her subproblem after finding all the necessary data on the blackboard. When the subproblem is solved (supposing the expert knows how) he or she writes all the relevant partial results on the blackboard, so that other experts can use them to solve their subproblems. Sooner or later the solution of the overall problem will be found. In the blackboard architecture a blackboard is used as a repository for all the partial results evaluated during the problem solving.

(ART is a trademark of Inference Corp.; KEE is a trademark of IntelliCorp, Inc.)

The blackboard architecture was applied for the first time in the Hearsay-II (Erman et al., 1980) speech understanding system. In the years since Hearsay-II a variety of blackboard-based systems have been developed: DENDRAL (Lindsay et al., 1980), HASP/SIAP (Nii et al., 1982), Scene Understanding (Nagao & Matsuyama, 1979), CRYSALIS (Terry, 1983), BB1 (Engelmore & Morgan, 1988), PROTEAN (Engelmore & Morgan, 1988), etc. The blackboard-based systems (Carver & Lesser, 1994) find their use in various areas such as speech, signal and image understanding, planning and scheduling, structure identification, and machine translation.

MEKON's facilities will be presented in terms of solving constraint satisfaction and search problems. The tasks of TMSs and inference engines (problem-solvers) in problem-solving are clearly separated and described in Section 2, as well as some characteristics of the three basic TMS types (JTMSs, ATMSs, and LTMSs). In Section 3 we describe briefly Prolog/Rex, BEST's knowledge representation language, the blackboard architecture applied in BEST, BEST's inference engine, MEKON's subsystems and the integration of MEKON with the inference engine. Section 4 gives some implementation details, particularly the node implementation in MEKON and the use of MEKON within a blackboard model, while Section 5 provides an example of how MEKON can be utilized in Air Force Tactical Decision Making. In Section 6, we mention some possible applications concerning constraint satisfaction and search problems.

2. Truth maintenance systems

The ability to deal with imprecise, incomplete and uncertain information is often required in solving different problems (Rolston, 1988). However, it is very difficult to implement this ability within an expert system. There are many types of uncertainty:

• uncertain knowledge — heuristic knowledge regarding some aspect of the domain. The expert is sometimes only able to give a probable conclusion after recognizing a certain set of evidence.

• uncertain data — when data obtained from the observed domain is not certain

• incomplete information — sometimes it is necessary to make decisions in the absence of some information

• randomness — some domains are inherently random. These domains still have stochastic properties, even though the knowledge is complete and certain.

Reasoning based on these types of uncertainty requires an approach different from the one based on predicate logic. Reasoning based on predicate logic is appealing because it is precise and rigorous. If there are no contradictions within the axioms, then the derived truths will never produce contradictions. The basic characteristic of predicate logic is monotonic reasoning, because the number of derived truths constantly increases over time.

However, there are some problems that cannot be solved using predicate logic. This is the case when:

• information is incomplete
• conditions change over time
• dead-ends can be reached in the reasoning process.

These problems can be solved using assumptions, which are considered true in the lack of evidence to the contrary. To solve problems with incomplete information, non-monotonicity handling is required; when conditions change over time, belief revision must be performed; and when dead-ends are reached, then a contradiction is detected and contradiction handling must be invoked (typical for search problems).

New evidence can cause the change of belief states of previously believed and disbelieved data, or generate contradictions. As a consequence, the set of currently believed assumptions, and data derived from them, must be revised.

If a truth maintenance system is added to an expert system, then it facilitates the solution of these kinds of problems. The TMS works as part of the knowledge base management system. It is invoked each time new evidence is obtained, or a new conclusion is derived by the inference engine. The TMS then performs belief revision and changes the belief states of the affected data, thus keeping the problem state consistent. Notice that the TMS plays a passive role, because it does not derive new conclusions. It maintains the set of data believed in the current problem state, thus enabling the inference engine to find and apply those rules that can contribute to the problem solution.

2.1. TMSs' role in problem-solving

Truth maintenance systems are implemented either explicitly in problem-solving tools, or implicitly within the applications that solve particular problems. However, TMSs cannot be used alone to solve problems. A truth maintenance system must be coupled with an inference engine (problem-solver) to enable the solution of problems. The tasks of these two systems in solving problems are clearly separated and defined.

All applications that solve constraint satisfaction or search problems have something in common that can be extracted and implemented within a tool that will be able to solve different problems without programming everything from scratch for each new problem. Such a tool has a structure as presented in Figure 1.

Figure 1: Structure of a problem-solving tool

The knowledge representation language provides the means for describing how a particular problem can be solved. This description is written into the knowledge base in a form that can be used by the inference engine during problem solving. The knowledge representation language is also used to set the initial problem state in the working memory.

The inference engine performs reasoning using different search strategies (depth-first, breadth-first, branch-and-bound, hill-climbing, A*, alpha-beta cutting, etc.). It is used to guide the reasoning process towards problem solutions. The inference engine calls the TMS's functions to obtain data needed in reasoning, to write conclusions into the working memory, and to signal the contradictions.

The TMS organizes data in working memory in such a way that it is easy to say which data hold in one context and which data do not hold in the same context. The inference engine cannot derive valid conclusions if it uses mixed data from different problem states (representing different contexts). Contexts (defined either explicitly or implicitly) represent the main data abstraction handled by the TMS, used to describe various problem states. Without the context abstraction, TMSs would not be able to describe problem states for different problems. Besides its basic task, the belief revision that must be performed whenever a context switch occurs, the TMS performs contradiction handling, which enhances the efficiency of the overall system a great deal by pruning the unproductive branches of the decision tree, thereby constraining the search space that must be examined. In the case when the TMS is able to check if a datum does not hold in a context, it facilitates non-monotonicity handling and enables reasoning with incomplete information.

Working memory contains data needed to describe problem states. Unlike the knowledge base, which contains static data (knowledge about the problem-solving) that are not changed during the problem-solving, working memory contains dynamic data (describing problem states) that reflect problem state transitions.

Applications used to solve a particular constraint satisfaction or search problem (e.g. a chess-playing program) usually have the structure presented in Figure 2. Although these applications have been programmed from scratch, in each of these applications we can easily identify the knowledge base, inference engine, truth maintenance system, and working memory. For instance, in a chess-playing program, the knowledge base contains knowledge about openings and figure-moving rules, the inference engine performs reasoning and applies the alpha-beta cutting search strategy, the TMS organizes data within contexts (performs belief revision after each move has been drawn), whilst the working memory contains board positions which the inference engine currently considers. Though the TMS is not implemented explicitly within a chess-playing program, each chess-playing program must provide the typical TMS functions concerning belief revision. For instance, the inference engine must obtain somehow the positions of figures within the currently considered position, and write a new position after a move has been drawn.

As we have seen, some kind of TMS must be implemented within each application that solves constraint satisfaction and search problems. The number of TMS functions that must be implemented depends on the complexity of each problem. The minimal TMS configuration includes functions that perform belief revision (reading data from a context, writing conclusions into a context). When a context switch occurs, belief revision must be done to determine which data hold in the current context. If a problem has some inherent constraints that must be satisfied, they can be used to enhance the efficiency of the problem-solving method by pruning the decision tree. To exploit inherent constraints, a TMS must provide functions that perform contradiction handling. Finally, TMS functions that perform non-monotonicity handling must be used to enable the solution of problems with incomplete information.
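To make this minimal configuration concrete, the following sketch (in Python; our own illustration with invented names, not MEKON's actual interface) shows a context-based working memory offering belief-revision reads and writes, a negated query for non-monotonicity handling, and a contradiction hook that marks a context as a dead end.

class MinimalTMS:
    """Toy context-based working memory; names and structure are illustrative only."""

    def __init__(self):
        self.contexts = {"root": set()}   # context name -> set of believed data
        self.parent = {"root": None}
        self.poisoned = set()             # contexts known to be contradictory

    def assert_datum(self, ctx, datum):
        self.contexts[ctx].add(datum)     # write a conclusion into a context

    def retract_datum(self, ctx, datum):
        self.contexts[ctx].discard(datum)

    def holds(self, ctx, datum):
        """Belief revision on read: a datum holds if it is believed in the
        context or in any of its ancestors."""
        while ctx is not None:
            if datum in self.contexts[ctx]:
                return True
            ctx = self.parent[ctx]
        return False

    def not_holds(self, ctx, datum):
        """Non-monotonicity handling: satisfied when the datum is absent."""
        return not self.holds(ctx, datum)

    def sprout(self, ctx, name):
        """Create a child context representing a new problem state."""
        self.contexts[name] = set()
        self.parent[name] = ctx
        return name

    def contradiction(self, ctx):
        """Contradiction handling: mark the state as a dead end."""
        self.poisoned.add(ctx)


tms = MinimalTMS()
c1 = tms.sprout("root", "c1")
tms.assert_datum(c1, ("allocated", "t1", "h1"))
print(tms.holds(c1, ("allocated", "t1", "h1")))           # True
print(tms.not_holds("root", ("allocated", "t1", "h1")))   # True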

Figure 2: Structure of a problem-solving application

In order to answer the question of when it would be appropriate to use a tool and when to program an application to solve a problem, we will give a brief discussion of the advantages and disadvantages of each approach. A problem-solving tool significantly reduces the time needed to implement an application, because it is necessary only to describe how the problem at hand can be solved using a knowledge representation language. If we have decided to program the same application from scratch, then, besides the description of how the problem can be solved, we would have to program an inference engine and the needed TMS functions, and to find the most suitable data structure to organize the data used in the reasoning process. However, an application programmed from scratch will undoubtedly be more efficient than one implemented using a tool, because the inference engine, TMS functions and data structures are tuned to the needs of that application.

2.2. Basic TMS classes

According to the way they record dependencies between facts, and the way they perform belief revision, contradiction handling and non-monotonicity handling, all TMSs can be divided into three classes: Justification-based Truth Maintenance Systems (JTMSs), Assumption-based Truth Maintenance Systems (ATMSs), and Logic-based Truth Maintenance Systems (LTMSs).

Let us now see what are the basic ideas that underlie the JTMSs. As their name suggests, JTMSs use justifications to record dependencies among nodes representing propositions (Doyle, 1979). A node can be derived using a rule or procedure. Such a rule or procedure will be fired or executed only if some preconditions are met, i.e. if some other nodes are currently believed (in), or disbelieved (out). Justifications are used to record the dependencies between the justified nodes and their antecedents. A justification contains two lists, an inlist and an outlist, to describe which nodes must be in, and which nodes must be out, to believe a justified node. Premises hold universally, and their justifications contain empty inlists and outlists. The justifications for assumptions contain non-empty outlists.


Justifications in JTMSs are used for two reasons. The first reason is the belief update, which is required whenever the belief state of a node changes. Justifications are used to find all the affected nodes, whose belief states are then re-examined. As a result, some of these nodes will become in, while others will become out. The second reason is contradiction handling. When an inconsistent set of believed nodes is discovered by a problem-solver, then a corresponding justification for the contradiction is added, and a dependency-directed backtracking system (implemented within a JTMS) is invoked. This system searches through the dependency network (using justifications and nodes) for the assumptions underlying the contradiction. It selects one assumption as the culprit and justifies one of the nodes from its outlist, thus removing the contradiction.

We will use a simple allocation example to illustrate the work of the dependency-directed backtracking algorithm. Suppose that we have three tasks (t1, t2, and t3), two hosts (h1 and h2) and one constraint: t1 and t2 must be dislocated (allocated to different hosts). Suppose also that a problem-solver initially provides the following nodes and justifications:

Index   Proposition          Justification
A       allocate t1 to h1    ⟨(), (B)⟩
B       allocate t1 to h2
C       allocate t2 to h1    ⟨(), (D)⟩
D       allocate t2 to h2
E       allocate t3 to h1    ⟨(), (F)⟩
F       allocate t3 to h2

Justifications contain two lists, where the first one represents the inlist and the second one the outlist (⟨inlist, outlist⟩). The assumptions A, C, and E are currently in, while nodes B, D, and F are currently out (these nodes initially do not have supporting justifications). Using assumptions A and C, a problem-solver can deduce a contradiction (tasks t1 and t2 must be dislocated):

G       CONTRADICTION        ⟨(A, C), ()⟩

The dependency network corresponding to this situation is presented in Figure 3. Arcs with a "+" tag designate members of inlists, while arcs with a "−" tag designate members of outlists. Using this dependency network, the dependency-directed backtracking system is able to find the assumptions underlying the contradiction (A and C). It selects assumption C as the culprit and uses premise X to remove the contradiction:

X       HANDLE               ⟨(), ()⟩

As a consequence, the JTMS makes A, D, and E in, and B, C, F, and G out.
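The same walk-through can be reproduced in a small program. The sketch below (our own encoding in Python, not Doyle's implementation) represents each justification as an (inlist, outlist) pair and recomputes the in/out labels by simple fixed-point iteration; after the culprit C is abandoned by giving D a premise-backed justification, the labels settle exactly as stated above.

# Nodes and their justifications as (inlist, outlist) pairs; a node is "in"
# when some justification has all inlist nodes in and all outlist nodes out.
justifications = {
    "A": [((), ("B",))],        # allocate t1 to h1
    "B": [],                    # allocate t1 to h2
    "C": [((), ("D",))],        # allocate t2 to h1
    "D": [],                    # allocate t2 to h2
    "E": [((), ("F",))],        # allocate t3 to h1
    "F": [],                    # allocate t3 to h2
    "G": [(("A", "C"), ())],    # CONTRADICTION: t1 and t2 share a host
}

def labels(justs):
    """Recompute in/out labels by iterating to a fixed point (sufficient for
    this small example; a real JTMS is more careful about cyclic support)."""
    status = {n: False for n in justs}          # False = out, True = in
    for _ in range(len(justs) + 1):
        changed = False
        for node, js in justs.items():
            new = any(all(status[i] for i in inl) and
                      not any(status[o] for o in outl)
                      for inl, outl in js)
            if new != status[node]:
                status[node], changed = new, True
        if not changed:
            break
    return status

print(labels(justifications))   # A, C, E in; the contradiction G is also in

# Dependency-directed backtracking picks C as the culprit and justifies a node
# from C's outlist (D) with the premise X, which removes the contradiction.
justifications["X"] = [((), ())]            # premise: empty inlist and outlist
justifications["D"] = [(("X",), ())]
print(labels(justifications))   # A, D, E, X in; B, C, F, G out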

In a JTMS there is only one context at a time. This context is uniquely determined by the set of assumptions currently believed, and contains all nodes that can be derived from them. A context switch occurs when a new assumption is introduced, or an old one (a culprit selected by the dependency-directed backtracking system) is disbelieved. A problem-solver works on the data determined by the current context.

Figure 3: Dependency network

The main difference between JTMSs and ATMSs lies in the treatment of contexts. While in a JTMS there always exists one context in which nodes can be in or out, in an ATMS there exist many contexts at the same time. Both JTMSs and ATMSs facilitate the examination of the decision tree, but ATMSs preserve all previously created contexts, while JTMSs preserve only the current context, and the information (justifications) used to recover when a dead-end (contradiction) is reached. As a consequence of this main difference, JTMSs find only one problem solution that corresponds to the currently believed, consistent set of assumptions, while ATMSs find all possible problem solutions for all consistent assumption sets. We can specify one assumption set, and the ATMS will then select the solution that corresponds to this assumption set. Another consequence is that ATMSs do not use the dependency-directed algorithm in contradiction handling, which is considered an advantage, because this algorithm is time-consuming. The dependency-directed algorithm need not be used in ATMSs because each time a dead-end is reached, the reasoning process can be continued starting from another, previously created context.

An ATMS environment is a set of assumptions (de Kleer, 1986). Logically, an environment is a conjunction of assumptions. A node holds in an environment if it can be derived from the set of assumptions and the set of justifications (generated by a problem-solver and used to describe dependencies between derived facts and their antecedents). An environment is inconsistent if false is derivable propositionally (a contradiction is detected). The set formed by assumptions and the nodes derivable from them comprises an ATMS context.

An ATMS label is a set of environments associated with every node. The label describes the assumptions the datum ultimately depends on and, unlike justifications, is constructed by the ATMS itself. While a justification describes how a datum is derived from its immediately preceding antecedents, a label environment describes how the datum ultimately depends on assumptions. The basic task of the ATMS is to guarantee that each label of each node is consistent, sound, complete and minimal. A label for a node is: consistent if all its environments are consistent, sound if the node is derivable from each environment of the label, complete if every consistent environment E in which the node is derivable is a superset of some environment E′ of the node's label, and minimal if there are no two environments in the node's label such that one is the superset of the other.
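The label bookkeeping can be illustrated with a short sketch (ours, in Python; it is not de Kleer's algorithm). Environments are represented as frozensets of assumptions; consistency is enforced by dropping environments that contain a known nogood, and minimality by dropping environments that are proper supersets of other environments in the label.

def consistent(label, nogoods):
    """Drop every environment that contains a known nogood."""
    return {env for env in label
            if not any(ng <= env for ng in nogoods)}

def minimal(label):
    """Drop every environment that is a proper superset of another one."""
    return {env for env in label
            if not any(other < env for other in label)}

nogoods = {frozenset({"A", "C"})}
label = {frozenset({"A"}), frozenset({"A", "B"}), frozenset({"A", "C", "D"})}
label = minimal(consistent(label, nogoods))
print(label)   # {frozenset({'A'})}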

When a problem-solver introduces a new justification for a node, an incremental change of the node's label is evaluated using the labels of the node's antecedents (given in the justification). After this incremental change is made consistent and minimal, it is combined with the node's old label to make a new minimal label for the node. Using the justifications in which the node appears as an antecedent, the new labels for the consequent nodes are evaluated. Note that once derived, a node is never re-derived in a new environment; only its label has to be changed.

Contradictions in the ATMS are handled using the nogood database. When a contradiction is detected, an environment containing it is added to the nogood database. This environment (called a nogood) and its supersets are then removed from all node labels. The nogood database is kept minimal, thus preventing the examination of the environments known to be inconsistent. In contradiction handling the ATMS uses environments instead of justifications (used in JTMSs for this purpose).
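A hedged sketch of this contradiction handling (our simplification, in Python): when a contradiction is signalled in an environment, that environment is recorded as a nogood, redundant supersets are dropped from the nogood database, and the nogood and its supersets are removed from every node label.

def add_nogood(nogood, nogood_db, labels):
    """Record a contradictory environment and prune node labels accordingly.
    `labels` maps node names to sets of frozenset environments (toy sketch)."""
    if any(ng <= nogood for ng in nogood_db):
        return nogood_db                     # already covered by a smaller nogood
    # drop existing nogoods that the new, smaller one subsumes
    nogood_db = {ng for ng in nogood_db if not nogood <= ng}
    nogood_db.add(nogood)
    # remove the nogood and its supersets from every node label
    for node, label in labels.items():
        labels[node] = {env for env in label if not nogood <= env}
    return nogood_db

labels = {"n1": {frozenset({"A", "C"}), frozenset({"A", "D"})}}
nogood_db = add_nogood(frozenset({"A", "C"}), set(), labels)
print(labels["n1"])   # only the environment {A, D} survives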

We will describe contradiction handling in the ATMS using the same allocation example we used to illustrate the dependency-directed backtracking algorithm. We will suppose that a problem-solver examines environments in a depth-first manner. It tries first to allocate task t1 to host h1 (assumption A) and then t2 to h1 (assumption C). However, it discovers and signals a contradiction to the ATMS. The ATMS adds the environment {A, C} to the nogood database, thus cutting the unproductive branch off the decision tree. The problem-solver then tries to allocate t2 to h2 (D), and finally t3 to h1 (E, Figure 4). Notice that in the ATMS there is no need for the dependency-directed backtracking algorithm. When the problem-solver detects a contradiction, it stops further evaluation of the unproductive branch and continues its work by examining other alternatives.

Figure 4: Decision tree

In order to provide non-monotonicity handling in the ATMS, Dressler (1988, 1989) proposed out-assumptions. An assumption Out(x) holds in all contexts where x does not hold. Unlike ordinary assumptions, which are not dependent on other assumptions, out-assumptions are dependent on the assumptions that characterize the contexts holding these out-assumptions. The ATMS must ensure that x and Out(x) do not hold in the same context, and that for each context either x or Out(x) holds. Notice that non-monotonicity handling in JTMSs must be used to remove contradictions by disbelieving culprit assumptions, while in ATMSs it is used only to enable reasoning with incomplete information.
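A toy illustration of these two conditions (our sketch, with invented names, not Dressler's encoding):

def out(x):
    """Represent the out-assumption Out(x) for a datum x."""
    return ("Out", x)

def holds_out(context, x):
    """Out(x) holds in a context exactly when x itself does not hold there."""
    return x not in context

def context_consistent(context):
    """x and Out(x) must never hold in the same context."""
    return not any(
        datum[1] in context
        for datum in context
        if isinstance(datum, tuple) and datum[0] == "Out"
    )

ctx = {("allocated", "t1", "h1"), out(("allocated", "t2", "h1"))}
print(holds_out(ctx, ("allocated", "t2", "h1")))   # True: t2-on-h1 is absent
print(context_consistent(ctx))                     # True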

We have seen that JTMSs and ATMSs rely on justifications to record dependencies between the nodes. LTMSs do not use justifications (McAllester, 1980), but logic-style clauses to specify constraints on the labels of the nodes they relate, where each node corresponds to one problem-solver's datum.

Each node in an LTMS has two labels attached, lt and lf. The label lt indicates either that we know the node is true or that we do not know the node is true. Similarly, the label lf indicates either that we know the negated node is true or that we do not know the negated node is true. There are four possible combinations of node labels:

• lt set, lf unset: the node is taken as true
• lt unset, lf set: the negated node is taken as true
• both unset: unknown truth value
• both set: contradiction.

There are two ways of handling these labels. One is the current-context approach applied in JTMSs, where the label values are evaluated according to the current assumption set, while the second is the assumption-based approach applied in ATMSs, where the label on a literal reflects all combinations of assumptions that make it true.

The contradictions in LTMSs are resolved by use of the dependency-directed backtracking algorithm, or by maintaining the nogood database when both labels of a node are set (a contradiction) under one assumption set.

For monotonic belief revision LTMSs use Boolean constraint propagation, while for non-monotonicity handling an algorithm similar to Gaussian elimination can be used (McDermott, 1991). Though Boolean constraint propagation is a very efficient algorithm, it cannot be used to solve a problem with incomplete information.
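Boolean constraint propagation is, in essence, unit propagation over the clauses. The following minimal sketch (assuming clauses represented as lists of signed literals; this is not McAllester's data structure) shows how asserting one literal forces others:

def bcp(clauses, assignment):
    """Unit propagation: literals are (name, polarity) pairs, clauses are
    lists of literals, assignment maps names to True/False."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [(v, p) for v, p in clause if v not in assignment]
            satisfied = any(assignment.get(v) == p
                            for v, p in clause if v in assignment)
            if satisfied:
                continue
            if not unassigned:
                return None                  # conflict: the clause is falsified
            if len(unassigned) == 1:         # unit clause: force the literal
                var, pol = unassigned[0]
                assignment[var] = pol
                changed = True
    return assignment

# (a or b), (not a or c): asserting a=True forces c=True
clauses = [[("a", True), ("b", True)], [("a", False), ("c", True)]]
print(bcp(clauses, {"a": True}))   # {'a': True, 'c': True}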

3. MEKON in BEST environment

As a problem-solving tool, BEST (Figure 5) offers Prolog/Rex (Vraneš & Stanojević, 1994), a knowledge representation language; an inference engine (Subašić et al., 1991; Vraneš & Stanojević, 1995b); and MEKON, a truth maintenance system. Prolog/Rex enables a user to define how the problem at hand can be solved, and to establish the initial state of the problem; the inference engine provides the necessary mechanisms to perform reasoning; while MEKON provides a means for the solution of constraint satisfaction and search problems. BEST would not be able to solve constraint satisfaction and search problems either without the inference engine or without MEKON. Though they are closely interrelated, they have clearly defined, distinct functions within the BEST environment. The capabilities of these two systems are integrated using rules.

3.1. Knowledge representation

In order to understand the conceptual basis of MEKON, we will give a brief overview of BEST's knowledge base and working memory organization. The knowledge in BEST is divided into declarative knowledge, used to describe the problem's domain of description, and procedural knowledge, used to describe how the problem is to be solved. The declarative knowledge in the working memory is organized in terms of contexts that represent different problem states. Each context is implicitly defined by the corresponding problem state, but is also explicitly defined by a context object. Prolog/Rex, as a knowledge representation language, is used to define the declarative and procedural knowledge in BEST.

Declarative knowledge  The main portion of declarative knowledge we want to handle in the system is brought together in a common category called a concept, which is a frame-based data abstraction (Fikes & Kehler, 1985; Minsky, 1975) that provides the knowledge engineer with an easy means of describing the types of domain objects that the system must model. Information about the object described in terms of the concept is stored in slots. A slot value can be inherited, i.e. passed down the object hierarchy. A restriction on a slot (value and type) provides automatic consistency checking. Each slot declaration consists of two parts: a name, and contents that may change under the action of the rules or procedures.

Concepts may belong to many classes simultaneously, thereby giving rise to multiple inheritance hierarchies. The inheritance network is declared by means of the two standard inheritance relations, is-a and instance-of. The is-a relation links one general concept to another, while instance-of represents one of something, i.e. a particular object that, of course, reflects all the properties which have been defined for its parent concept, but also has the properties that vary between individuals with the same parent. Prolog/Rex automatically creates the inverse relations (has-a, has-instances), which is a labor-saving feature. Furthermore, the is-a relationship is transitive, which means that the declaration of is-a relationships resulting from the transitive composition of others can be omitted. It is obvious that not all relations between concepts should produce inheritance. Moreover, not all the possible inheritance links among concepts can be expressed with the standard, system-defined relations is-a and instance-of and their inverse relations has-a and has-instances. Therefore, the possibility of creating user-defined inheritance and other relations is provided by Prolog/Rex.

Demons represent a means of noticing and acting upon changes in slot values, necessary for significant event detection. Thus, the execution of a specified function can be triggered whenever a slot value is referenced, or whenever a new value is stored in a slot. Such demons can automatically and invisibly compute values to be placed in a slot (Prolog/Rex's intrinsic inference). Demons are also needed when values must be looked up in databases. They also participate in the expert system reasoning activities by providing "automatic" inference as part of each assertion and retrieval operation. They also provide a way of associating domain-dependent behavior with concepts, which, together with inheritance and data abstraction, qualifies concepts as an object-oriented programming facility.
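The behaviour of demons can be pictured with a small sketch (ours, in Python; Prolog/Rex's actual syntax is not shown). A slot carries optional if-read and if-written procedures that fire on retrieval and assertion:

class Slot:
    """Illustrative slot with read/write demons, loosely modelled on the
    description above; not Prolog/Rex syntax."""

    def __init__(self, value=None, if_read=None, if_written=None):
        self._value = value
        self._if_read = if_read          # called whenever the value is referenced
        self._if_written = if_written    # called whenever a new value is stored

    def get(self):
        if self._if_read:
            return self._if_read(self._value)    # may compute the value on demand
        return self._value

    def set(self, value):
        if self._if_written:
            self._if_written(value)              # e.g. significant-event detection
        self._value = value


speed = Slot(if_written=lambda v: print("event: speed changed to", v))
speed.set(300)        # the demon fires and could trigger further inference
print(speed.get())    # 300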

Facts are used to represent a simple relational statement. A hypothesis is used to define the assumption that invokes the alternative pathway to the solution, i.e. that changes the problem state, through context generation.

Procedural knowledge  As declarative knowledge is used to describe a problem state, procedural knowledge should provide the means for an efficient search through the problem states. To provide an efficient way of examining the problem states, we use rules. Prolog/Rex facilitates the creation of many kinds of rules dedicated to different tasks: forward-chaining rules, backward-chaining rules, set-control rules, constraint rules, and Domain Knowledge Source Activation Rules (DKSARs).

BEST uses forward-chaining rules to solve constraint satisfaction and search problems, therefore we will only give a brief overview of this kind of rule. Each forward-chaining rule has a precondition-action format.

Figure 5: BEST environment

In the rule premise there can appear several different preconditions that must be fulfilled before the corresponding action part is executed. The preconditions of interest are: patterns, concept preconditions, context preconditions, repeated preconditions, and global preconditions. They are all used for data retrieval.

One pattern can match one or more facts or hypotheses that are valid in the current context. A hypothesis pattern is defined by a hyp pattern. Negated patterns, used in non-monotonicity handling, are satisfied if in the current context there does not exist a fact or hypothesis that can match this pattern. Each pattern consists of terms that can be atoms, variables, integers, floating point numbers, lists or structures (compound terms). The operator K is used as an escape to Prolog. The concept precondition enables the retrieval of concept slot values. The context precondition allows us to find the context which contains facts, hypotheses, and concepts matching the patterns in the context precondition. The root context is the one that contains the initial state of a given problem. All other contexts are its descendants. The repeated precondition is similar to the context precondition. It finds the context that lies on the path from the current context to the root context and that contains the same state as the current context. The global precondition provides a means for the retrieval of information from the current context on the global blackboard. By default, rules retrieve information from the current context on the local blackboard. In fact, the global precondition switches the current blackboard from local to global, and then, by using the preconditions described so far (different patterns, concept, context and repeated preconditions), enables the retrieval of information from the current context on the global blackboard.

In the action part of a rule the following actions use MEKON's functions: assert, retract, modify, to meta level, to user level, hypothesize, sprout, merge, global, believe (belief revision functions), and poison (contradiction handling function).

The assert action asserts a new fact, a new concept, or a new slot value (or values) in the current context and activates the pattern-matching mechanism in order to find all forward-chaining rules affected by the assertion. The retract action retracts facts, concepts or slot values from the current context and activates the pattern-matching algorithm. The modify action changes the slot value of a concept in the current context. It activates the pattern-matching algorithm twice, the first time for the retraction of the old slot value, and the second time for the assertion of the new slot value. The to user level action changes the level from meta to user level, while the to meta level action does it vice versa. The hypothesize action asserts hypotheses in a newly created context. The sprout action has a similar usage: it creates a new context. The merge action merges the contexts which are listed as terms of this structure. The resulting context contains hypotheses, facts, and concepts from all the merged contexts. The merge action can be used when partial solutions exist in different contexts and we want to collect them into one context to obtain a complete solution to a problem. The names of the contexts containing partial solutions can be obtained using the context precondition. The global action allows changes on the global blackboard. It first switches the current blackboard from local to global and then executes the actions in the current context on the global blackboard. It can execute all actions permitted in the action part of a forward-chaining rule. The poison action destroys (poisons) the current context and truncates the decision tree (made of contexts) when an undesired state occurs. The believe action prunes the context tree so that the current context becomes the new root. We may want this when some context contains facts so desirable that we would like to continue solving the problem after pruning the decision tree.
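To make these context manipulations concrete, the sketch below (an illustration of the semantics just described, written by us in Python rather than in Prolog/Rex) maintains a tree of contexts and implements sprout, hypothesize, merge, poison and believe over it.

class ContextTree:
    """Toy context tree mirroring the actions described above (illustrative only)."""

    def __init__(self):
        self.data = {"root": set()}      # context -> data asserted in it
        self.parent = {"root": None}
        self.root = "root"

    def sprout(self, ctx, name):
        """Create a new (child) context."""
        self.data[name] = set()
        self.parent[name] = ctx
        return name

    def hypothesize(self, ctx, name, hypothesis):
        """Assert a hypothesis in a newly created context."""
        self.sprout(ctx, name)
        self.data[name].add(hypothesis)
        return name

    def merge(self, name, ctxs):
        """Collect partial solutions from several contexts into one."""
        self.data[name] = set().union(*(self.data[c] for c in ctxs))
        self.parent[name] = self.root
        return name

    def poison(self, ctx):
        """Destroy a context (and its descendants) when an undesired state occurs."""
        victims, changed = {ctx}, True
        while changed:
            changed = False
            for c, p in list(self.parent.items()):
                if p in victims and c not in victims:
                    victims.add(c)
                    changed = True
        for c in victims:
            del self.data[c], self.parent[c]

    def believe(self, ctx):
        """Prune the tree so that the given context becomes the new root
        (here simplified: ancestor data is folded in, other branches dropped)."""
        keep, c = [], ctx
        while c is not None:
            keep.append(c)
            c = self.parent[c]
        contents = set().union(*(self.data[c] for c in keep))
        self.data, self.parent, self.root = {ctx: contents}, {ctx: None}, ctx


tree = ContextTree()
c1 = tree.hypothesize("root", "c1", ("allocate", "t1", "h1"))   # new context holding a hypothesis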

3.2. Blackboard architecture

The architecture of the multiparadigm system developed by BEST (Figure 6 gives a schematic of it) consists of two groups of individual computation agents, known as Domain Knowledge Sources (DKSs) and Control Knowledge Sources (CKSs), and two shared databases, known as the Global Domain Blackboard (GDBB) and the Control Blackboard (CBB). The shell permits a heterogeneous knowledge source structure, where each knowledge source can be programmed within the most suitable paradigm. However, no matter what representation is used, to every knowledge source there corresponds a Domain Knowledge Source Activation Rule (DKSAR) which has an if-then (precondition-action) format, where the if-part (Domain Knowledge Source Relevance Test, DKSRT) describes situations in which the knowledge source can contribute to the problem-solving process, while the then-part (Domain Knowledge Source Invocation, DKSI) initiates the knowledge source execution. In the if-part, an event or a set of significant blackboard events the corresponding knowledge source is interested in is declared. An event may occur whenever a specified blackboard item (concept, fact, hypothesis) is written, modified or deleted. The if-part can contain arbitrarily complex tests, for instance a Boolean combination of atomic tests for the presence (absence) of certain slot values, facts, or hypotheses (or their combination). In the then-part, the computation (traditional procedure, logic program, object-oriented program, a set of SQL primitives, and so on) or rule interpretation (whether a simple RBS or a more sophisticated hypothetical or non-monotonic reasoning system) that determines a desired effect (to either create a new object or change an old one) is initiated.

The framework organization allows these various independent and diverse sources of knowledge to be specified and their interactions co-ordinated so that they might co-operate with one another (through the global blackboard) to effect a problem solution. Knowledge sources react opportunistically to the global blackboard changes produced by other knowledge source executions and, as a result of their computation/cognition, produce new changes.

The Global Domain Blackboard (GDBB) is the central communication and paradigm integration medium, used as a repository for global data, partial solutions or pending knowledge sources. When one knowledge source is active, it posts on the blackboard all the information that is of importance in solving the overall problem. This information may trigger some other knowledge sources.

Using different relations (system-defined inheritance relations and/or arbitrary user-defined relations) we can divide the global blackboard into several panels with many levels of abstraction. Panels can be described as collections of objects that share some common properties. For instance, if we are playing a war game, one panel may represent one side, and the other panel the other side. In this example we can also distinguish different levels of abstraction, if we are tracing the decision making process from the Supreme Command to the Commands at the lowest level. For every Command we know to which level of abstraction it belongs.

Figure 6: Structure diagram

The global blackboard does not have a predetermined structure. A user can easily tailor the structure of the global blackboard to perfectly fit a problem of interest. The global blackboard is implemented using a common substrate, the declarative data carriers of Prolog/Rex (facts, hypotheses, concepts, relations, demons, and contexts), and the user is free to use them in the way that is most convenient in solving a particular problem. Since each knowledge source contains many internal hypotheses and partial solutions which need not (or even should not) be visible to other knowledge sources, each knowledge source can contain its own Local Domain Blackboard (LDBB). A local blackboard can also be partitioned into panels containing different types of data and can have three other dimensions (internal abstraction level, solution alternative and/or interval). All information that belongs to the local blackboard is local to that knowledge source, and cannot be used in other knowledge sources. We implement the local blackboards in order to avoid undesirable interference between knowledge sources, and to make the mapping to a distributed or multiprocessor platform much easier and more beneficial. Since BEST was conceived as a potentially distributed/multiexpert development environment from the very beginning, we have gone to some lengths to avoid (or at least reduce) the blackboard information bottleneck. One way of achieving this is just by providing a local blackboard, shared by the interrelated rules or procedures within a single knowledge source. Two knowledge sources can share the same local blackboard in two cases: when the names of the domains are declared to be the same (the names of the local blackboards are the same), or when the domain name of the activated knowledge source is shared (the activated knowledge source works on the local blackboard of the previous knowledge source).

Knowledge sources can also manipulate the data on the global blackboard, and this is, at the same time, the only permitted way of communication between knowledge sources.

Knowledge sources can retrieve the data from the global and local blackboards (working memory), or change their contents, only by calling the MEKON functions. MEKON serves as a kind of interface between knowledge sources and working memory. It provides the truth maintenance capabilities on the global and local blackboards, thus enabling the solution of complex problems concerning different constraint satisfaction and search problems. Scenario making for a war game represents an excellent example. First, various planning problems must be solved for the Commands at different levels on both sides, and then, according to these plans, a scenario for each of these Commands must be generated. Finally, all these scenarios must be adjusted to obtain a consistent, global scenario. Each of these problems has its own state space. Standard TMSs cannot cope with this problem, because they can examine only one state space. However, in BEST each of these planning and scenario-making problems would be solved by a corresponding knowledge source on a different local blackboard. These partial solutions could then be used to make a global scenario.

We distinguish two levels of built-in control in BEST. The first level of control affects the knowledge source scheduling and activation (Figure 6), while the second level of control affects the rule scheduling and interpretation (see Section 3.3). Whenever a concept, fact or hypothesis on the global blackboard is asserted, retracted, or modified (using MEKON), the indexing scheme (a subsystem within the inference engine) is activated. It receives information about the performed operation, and then tries to find a Global Domain Blackboard Index (GDBBI) that matches the given concept, fact, or hypothesis. The global index contains a list of all DKSARs whose if-parts are affected. After the first DKSAR is picked from the list, control is transferred to its if-part. If all the preconditions in the if-part are fulfilled, the Conflict Resolution (CR) mechanism gains control. Its task is to put the DKSAR instance into the Knowledge Source Agenda (KSA) using the DKSAR's priority, and the value of a criterion (if one is defined), such as the efficiency, reliability or value of a knowledge source, or a heuristic function which combines these values. The agenda is always kept sorted in decreasing order of priorities. When the conflict is resolved, control is returned to the if-part in an attempt to satisfy the preconditions using other data. If it succeeds, control is transferred again to the conflict resolution mechanism, otherwise control is returned to the indexing scheme and the next DKSAR from the list is tried.

A Domain Knowledge Source Activation (DKSA) takes the first DKSAR instance from the agenda and transfers control and the data collected in the DKSRT to the Domain Knowledge Source Invocation (DKSI), the then-part of the corresponding DKSAR. Within the then-part, data either on the global blackboard or on the local blackboard can be changed. If the data are changed on the global blackboard, control is transferred to the indexing scheme at the global level, and if they are changed on the local blackboard, control is transferred to the indexing scheme at the local level, thus initiating the knowledge source execution. When either the indexing scheme or the knowledge source finishes its task, control is returned to the knowledge source activation, and the control loop repeats. The control loop terminates when the agenda is empty or at the user's request.
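Schematically, and with all helper names invented by us for illustration (this is not BEST code), the first-level control cycle can be pictured as follows:

import heapq

class Agenda:
    """Priority agenda of knowledge-source activation records (toy version)."""
    def __init__(self):
        self._heap, self._n = [], 0
    def insert(self, priority, instance):
        heapq.heappush(self._heap, (-priority, self._n, instance))
        self._n += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]
    def __bool__(self):
        return bool(self._heap)

def control_loop(agenda, dksars, index):
    """Toy first-level loop: activate the best DKSAR, let it change the
    blackboard, re-index, test the affected DKSARs, and repeat."""
    while agenda:
        instance = agenda.pop()                       # DKSA
        changed_items = instance()                    # DKSI (the then-part)
        for item in changed_items:
            for name in index.get(item, []):          # GDBBI lookup
                rule = dksars[name]
                for prio, inst in rule(item):         # DKSRT: satisfied premises
                    agenda.insert(prio, inst)         # conflict resolution

# Tiny usage: one DKSAR interested in 'threat' items posts a planning activity.
index = {"threat": ["planner"]}
dksars = {"planner": lambda item: [(10, lambda: print("planning for", item) or [])]}
agenda = Agenda()
agenda.insert(5, lambda: ["threat"])                  # seed event
control_loop(agenda, dksars, index)                   # prints: planning for threat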

3.3. Integration of the inference engine with MEKON at the local level

Within BEST, MEKON is integrated with the inference engine at the local level (Figure 7), and represents the basis for the implementation of all programming paradigms in BEST that rely on the use of rules or concepts. Although MEKON is integrated with BEST's inference engine, these two systems represent two distinct parts with clearly separated functions. As an interface between these two systems we use rules, therefore MEKON can be used independently as a truth maintenance system with any other inference engine that utilizes BEST's internal form of rules.

MEKON's functions are called either from the rule premise, or from the action part. From the rule premise we can check if a datum holds in the current context, while from the action part we can write conclusions or perform some manipulations on contexts.


Inference engine's structure  In order to understand the way MEKON is integrated with the inference engine, we will briefly describe the structure of the inference engine at the local level. Three subsystems comprise the structure of the inference engine (Figure 8):

— indexing scheme
— conflict resolution, and
— rule activation

The indexing scheme (Rowe, 1988; Vraneš & Stanojević, 1995b) is used to enhance the search through the rules whose premises may be satisfied, considering the last change in the working memory. Hence, an unnecessary search through all rule premises is avoided, by checking only those rule premises that can be satisfied, thus considerably increasing the efficiency of the overall performance.

BEST's inference engine allows the use of different search strategies (depth-first, breadth-first, hill-climbing, beam-search, branch-and-bound, best-first, and A* (Pearl, 1984)), with or without priorities. It can happen that several rule premises with the same priority are satisfied at the same time, therefore inducing the need to determine the order of execution of the corresponding action parts. That order is dependent on the selected search strategy, and the conflict resolution subsystem (Subašić et al., 1991) is responsible for handling this problem.

The rule activation subsystem selects one rule instance from the conflict set (according to the search strategy), and then executes the corresponding action part. The conflict set contains all rule instances whose premises were satisfied. This conflict set is represented by the agenda. Rules in the agenda are described by rule instances and ordered according to their priority and the applied search strategy. A rule instance includes the data that are used within the execution of the corresponding action part or for explanation purposes:

[name, priority, patterns, negated patterns, variables, context, fval, sval]

Here, name represents the rule name, priority stands for the rule priority, patterns contain all patterns satisfied in the rule premise, negated patterns represent negated patterns from the rule premise, variables contain the names of variables used in the rule premise and their values, context represents the context in which the rule premise is satisfied, and fval and sval hold the values of different functions used in the heuristic search.
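Read literally, a rule instance is a record with the eight fields listed above. A hedged Python rendering, with field names taken from the text and types assumed, might look as follows; the agenda-ordering key is likewise only a plausible approximation of the behaviour described.

    from dataclasses import dataclass

    @dataclass
    class RuleInstance:
        name: str               # rule name
        priority: int           # rule priority
        patterns: list          # patterns satisfied in the premise
        negated_patterns: list  # negated patterns from the premise
        variables: dict         # premise variables and their bound values
        context: str            # context in which the premise is satisfied
        fval: float = 0.0       # heuristic function value (e.g. for A*)
        sval: float = 0.0       # secondary value used by the heuristic search

    def order_agenda(instances):
        # Assumed ordering: higher priority first, then the heuristic values.
        return sorted(instances, key=lambda r: (-r.priority, r.fval + r.sval))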

BEST's inference engine provides both forward- and backward-chaining reasoning, as well as their combination.

Figure 7: Interface between MEKON and inference engine

Figure 8: Inference engine’s structure

Figure 9: MEKON’s structure


MEKON's structure

MEKON consists of three subsystems (Figure 9):

— pattern-matching
— working memory modification, and
— context handling

The pattern-matching subsystem checks the following preconditions from the rule premise:

— pattern
— concept
— context, and
— repeated

The working memory modification subsystem performs some actions from the action part of rules:

— assert
— retract, and
— modify

while the context handling subsystem provides the means for context manipulations, including the following (a minimal interface sketch is given after the list):

— hypothesize
— sprout
— poison
— merge
— believe
— to user level
— to meta level, and
— global
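The following interface is a minimal sketch of how these context manipulations could be grouped; every method mirrors an operation from the list above, but the signatures are assumptions rather than MEKON's actual calls.

    class ContextHandling:
        """Illustrative wrapper over the context tree; not MEKON's real API."""

        def hypothesize(self, context, hypothesis):
            """Assert a hypothesis (assumption) into the given context."""

        def sprout(self, parent):
            """Create a descendant context that inherits the parent's data."""

        def poison(self, context):
            """Remove a contradictory context from the working memory."""

        def merge(self, context_a, context_b):
            """Explicitly merge two contexts, unless merging is forbidden."""

        def believe(self, context):
            """Make the given context the currently believed one."""

        def to_user_level(self, datum):
            """Assert a datum on the user's level."""

        def to_meta_level(self, datum):
            """Assert a datum on the meta level (visible in all contexts)."""

        def global_(self, datum):
            """Write a datum onto the global blackboard."""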

Putting it all together

MEKON does not represent an isolated whole within the BEST environment, but a functional integration with BEST's inference engine. In Figure 10 we represent this integration by describing the control and data flow between the inference engine and MEKON at the local level.

Using Figure 10 we will describe how the inference engine and MEKON work together. Suppose that at one moment the local agenda contains the rule instances whose premises were satisfied according to the current state of the working memory. The rule activation subsystem selects one of these rule instances, finds the corresponding action part, and initiates its execution. The action part of a rule calls the functions from the working memory modification and context handling subsystems. Each modification of the working memory invokes the indexing scheme. The indexing scheme provides the means for finding those rules whose premises may be satisfied after the last working memory modification. From the rule premise, the pattern-matching system is called to check if in the current context there exist data (facts, hypotheses, slot values of concepts) that can satisfy the preconditions from the rule premise. Instances of all rules whose premises are satisfied are stored in an auxiliary agenda, and when all modifications of the working memory are finished, the conflict resolution subsystem is called from the action part, and the rule instances are written from the auxiliary agenda to the local agenda according to the selected search strategy.
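Under the same caveat, the local control and data flow just described can be compressed into the following Python-style pseudocode; the subsystem objects (indexing, matcher, resolver) and their methods are placeholders for the subsystems named in the text.

    def local_control_loop(agenda, rules, indexing, matcher, resolver, wm):
        """Sketch of the joint inference-engine/MEKON loop at the local level."""
        while agenda:                                    # stops when the agenda is empty
            instance = agenda.pop(0)                     # rule activation: take one instance
            aux_agenda = []
            for action in rules[instance.name].actions:  # execute the action part
                changed = action(wm, instance)           # assert / retract / modify / context ops
                for rule in indexing.candidates(changed):
                    # pattern matching in the current context only
                    aux_agenda.extend(matcher.match(rule, wm, instance.context))
            # conflict resolution: merge and order instances by the chosen strategy
            agenda[:] = resolver.order(agenda + aux_agenda)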


4. Implementation details

MEKON represents a kind of Assumption-based Truth Maintenance System (ATMS), though it has some peculiarities that distinguish it from other ATMSs. Unlike classical ATMSs, MEKON does not rely on the use of justifications. The characteristic that qualifies MEKON as an ATMS is the use of contexts in the examination of problem states.

Each context represents a problem state. A set of hypotheses, facts, and concepts comprises the content of a context. Data defined in a context can be referenced only by knowing the context name.

The three basic types of truth maintenance system can be used to solve constraint satisfaction problems only. However, MEKON should provide the means for the solution of typical search problems (such as planning and scenario-making problems). Besides the described TMS functions, MEKON must also provide such features as:

O state variables
O context sensitive generation of assumptions, and
O explicit context generation

to allow the solution of search problems. These features can be implemented most easily within an ATMS-like truth maintenance system, therefore we have used the basic ATMS ideas as a basis for MEKON.

A context-sensitive assumption is generated upon the current problem state described by the state variables. It represents a state transition that translates the current state of the problem to the next state. The new values of the state variables represent repercussions of the performed state transition. While an "ordinary" assumption represents a decision about assigning a value to the state variable, a context sensitive assumption represents a decision as to which of the possible state transitions to perform.
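The farmer's dilemma makes the distinction concrete: the state variables record which shore each item is on, and a context-sensitive assumption picks one of the crossings that are legal in the current state. The functions below are a minimal sketch under that reading, not MEKON code.

    # State variables: the side of the river each item is currently on.
    initial_state = {"farmer": "shore 1", "fox": "shore 1",
                     "goat": "shore 1", "cabbage": "shore 1"}

    def legal_transitions(state):
        """Context-sensitive assumptions: the crossings permitted in this state."""
        cargo_options = [None] + [x for x in ("fox", "goat", "cabbage")
                                  if state[x] == state["farmer"]]
        return [("cross", cargo) for cargo in cargo_options]

    def apply_transition(state, transition):
        """New values of the state variables after the chosen state transition."""
        _, cargo = transition
        other = "shore 2" if state["farmer"] == "shore 1" else "shore 1"
        new_state = dict(state)
        new_state["farmer"] = other
        if cargo is not None:
            new_state[cargo] = other
        return new_state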

In other ATMSs, contexts are implicitly determined by the corresponding environments, while in MEKON they are generated explicitly. In MEKON, assumptions are dependent on the contexts, while in other ATMSs contexts are dependent on the assumptions.

Notice that the dependencies between the derived data and their antecedents are maintained explicitly within the classical TMSs (JTMSs, ATMSs and LTMSs), while in MEKON these dependencies are recorded implicitly. Each state variable and each context sensitive assumption depends on only the contexts where it holds. The dependencies that would explicitly describe how a new value of the state variable is inferred, or how a context sensitive assumption is generated, are not recorded. They do not depend explicitly on the antecedent state variables as would be the case in the classical TMSs.

4.1. Nodes

Figure 10: Functional integration of MEKON and inference engine at the local level

A node in MEKON that represents a datum (fact, hypothesis, or the slot value of a concept) is extended by the context name in which the datum holds. For each datum in a context there exists one node. In a system there can exist several instances of the same datum defined in different contexts (Figure 11). This redundancy is used to gain efficiency (a typical time-space tradeoff). We spend some additional time when we copy the contents of the antecedent context into a new context, but we also save some time for each retrieval, assertion, or deletion operation, because we need not examine the context tree to check whether a datum, asserted into one of the ancestor contexts, is valid in the current context or not. The context names are used as keys, therefore in an attempt to check the presence of a datum in a context, a linear search through the data within the given context has to be performed. For big search spaces, we have provided automatic garbage collection.

If, however, we had used one node for each datum valid at the same time in different contexts (numbered in a list attached to the node), then we would have to perform the search through all data in the working memory. When we find the required datum, we have to check the list of contexts in which the datum holds. It is obvious that such an approach would require less memory, but at the same time it would require more time to check if a datum holds in a context. Furthermore, the search through the list of contexts would take more and more time as the number of contexts increases.


All data in MEKON are represented using hash tables (Figure 12). We have defined one hash table for facts, another one for hypotheses, and a separate hash table for each concept, where context names are used as keys. When we create a new context, we have to copy the contents of the ancestor context, i.e. we have to find all facts, hypotheses and concepts valid in the ancestor context and to create new nodes valid in the new context. We also have to create a new context object with the corresponding slot values, and to establish new links in the context tree.
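A minimal sketch of this organization, with ordinary Python dictionaries standing in for the internal hash tables, could look like this (the class is illustrative only):

    class WorkingMemory:
        """Per-context copies of data, keyed by context name (sketch)."""

        def __init__(self):
            self.facts = {}        # context name -> list of facts
            self.hypotheses = {}   # context name -> list of hypotheses
            self.concepts = {}     # concept name -> {context name -> slot values}

        def sprout(self, parent, child):
            # Creating a new context copies the ancestor's contents: extra memory,
            # but later retrievals never have to walk up the context tree.
            self.facts[child] = list(self.facts.get(parent, []))
            self.hypotheses[child] = list(self.hypotheses.get(parent, []))
            for table in self.concepts.values():
                if parent in table:
                    table[child] = dict(table[parent])

        def holds(self, fact, context):
            # Linear search through the data of the given context only.
            return fact in self.facts.get(context, [])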

Two opposite approaches in the node implementation are applied in ART's viewpoint mechanism (ART, 1987) and in MEKON. In MEKON there exist as many instances of a datum as there are contexts in which this datum holds. In ART, to each datum an extension is attached, describing in which contexts the datum holds. This extension contains the name of a context where a fact has been asserted, but also the names of the contexts where this fact has been removed (Figure 13). Therefore, in ART only the changes are recorded, which results in lower memory consumption. However, this lower memory consumption has to be paid for by each assertion, deletion, or retrieval operation, where an additional search through the context tree must be performed for each instance that satisfies the corresponding pattern, in order to determine whether this instance is valid within the current context or not. As a consequence of this additional search, ART's performance is inevitably degraded as the context tree grows, which is not the case in MEKON. Unfortunately, we were not able to perform measurements to support the above statements.

Figure 11: MEKON nodes

Figure 12: Hash tables

Figure 13: ART nodes

*A farmer is on his way home from the market. He has purchased a fox, a goat, and a cabbage. In order to get home, the farmer must cross the river. The only available boat is so small that the farmer can take only one possession with him each time he crosses the river. The farmer's plans are complicated by the fact that some of his purchases cannot be left alone together. The fox will kill the goat if left alone with it. The goat will eat the cabbage if left alone with it.

While in MEKON a check for the validity of a datum in a context requires a search through the data within the given context, in ART all of a datum's instances and their extensions have to be examined. The examination includes a search of the path that leads from the current context to the root context for a context (written in a datum's extension) in which the datum is asserted. If on the path from the context where the datum was asserted to the current context there does not exist a context (written in the datum's extension) in which the datum is deleted, then it holds in the current context. Such a node implementation is especially inefficient in the implementation of negated nodes, because in order to find an extension of a negated node, it is necessary to find all contexts where the datum does not hold.

4.2. MEKON in blackboard model

MEKON is the first truth maintenance system implemented within the blackboard framework that enables:

O the solution of search problems


O the solution of several subproblems (including constraint satisfaction and search subproblems) within a complex problem.

MEKON is the only system that provides truth maintenance capabilities on the local blackboards. This feature enables the implementation of complex systems that cannot be built using any other problem solving tool. There are two JTMSs reported in the literature (Bharadwaj et al., 1994; Mitra & Dutta, 1994) and one ATMS implemented in HEARSAY III (Engelmore & Morgan, 1988), but they use global blackboards only. Each of these systems can be implemented using one knowledge source in BEST. Furthermore, these systems can only solve constraint satisfaction problems, while MEKON, besides the solution of constraint satisfaction problems, allows the solution of general search problems.

All blackboards in MEKON contain two levels — the meta level and the user's level (Figure 14). The meta level contains only one context, while the user's level can contain an arbitrary number of contexts. Facts and concepts on the meta level are visible (true) in all contexts on the user's level, and represent premises. Contexts are used to describe different problem states in time or different hypothetical situations. Contexts on the user's level inherit all data from their ancestor contexts at the time of their creation. If necessary, some data can be selectively deleted, and some more asserted later. Contexts comprise the structure of global and local blackboards.

Figure 14: Blackboard layout

The names of contexts on the user's level are automatically generated according to the names of the blackboards and the state of the context counter attached to each blackboard. When a new context is created, its name is obtained by concatenating the name of the current blackboard (global or local) and the current value of the context counter attached to that blackboard. The contexts on the meta levels are referenced as meta contexts.
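Assuming this naming convention, a context name could be produced as simply as in the sketch below (the counter handling is our assumption):

    def new_context_name(blackboard, counters):
        """Concatenate the blackboard name and its context counter (sketch)."""
        counters[blackboard] = counters.get(blackboard, 0) + 1
        return f"{blackboard}{counters[blackboard]}"

    # e.g. successive calls for the "farmers dilemma" blackboard would yield
    # "farmers dilemma1", "farmers dilemma2", ...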

ART allows the creation of several user levels, while MEKON provides only one. Multiple levels in ART are used for the logical separation of data (assumptions from the derived data). However, they do not recommend the use of more than two user levels because of the performance degradation. Besides, multiple user levels induce an unnecessary complexity in rule writing that is hard to understand and to use. In MEKON, derived data and assumptions are logically separated, and defined on the same, user level, thus making the rule writing and understanding easier.


4.3. Contexts

Context objects are implemented in terms of frames (Figure 15). Relations ancestor of and descendant of connect contexts in a hierarchical structure. Slot depth contains the depth of a context in a context tree (the root context has depth 0). Slot no merge contains the names of the contexts with which merging is forbidden.

As an example of a context we have chosen the context farmers dilemma4, which describes one state (after the application of the first rule) of the farmer's dilemma problem, and contains the following nodes:

([farmer, on, shore 1], farmers dilemma4),
([position, fox, shore 1], farmers dilemma4),
([position, goat, shore 2], farmers dilemma4),
([position, cabbage, shore 1], farmers dilemma4)

In the initial state (described by the context farmers dilemma1) they were all on shore 1, and then the farmer takes a goat to shore 2, thus producing a new problem state described by the context farmers dilemma4.

Figure 15: Context object representation

After the complete generation of the context tree, the context object farmers dilemma4 will have the following slots:

(depth, 2),
(ancestor of, [farmers dilemma24, farmers dilemma5]),
(descendant of, [farmers dilemma2])

which means that farmers dilemma4 is on the second level in the decision tree on the user's level of the farmers dilemma local blackboard, that it has two descendants, farmers dilemma24 and farmers dilemma5, and one ancestor, farmers dilemma2.
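Purely as an illustration, the same context object can be rendered as a small Python class whose attributes follow the slots of Figure 15 and the listing above:

    class Context:
        """Frame-like context object with the slots described in the text (sketch)."""

        def __init__(self, name, depth=0):
            self.name = name
            self.depth = depth          # depth in the context tree (root = 0)
            self.ancestor_of = []       # contexts sprouted from this one
            self.descendant_of = []     # this context's ancestor(s)
            self.no_merge = []          # contexts with which merging is forbidden

    farmers_dilemma4 = Context("farmers dilemma4", depth=2)
    farmers_dilemma4.ancestor_of = ["farmers dilemma24", "farmers dilemma5"]
    farmers_dilemma4.descendant_of = ["farmers dilemma2"]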

Explicit merging is performed in the action part of the forward-chaining rule. If the merged context contains hypotheses, facts or concepts that cannot coexist, it will be poisoned (removed from the working memory).

4.4. Justifications

When we compare de Kleer's ATMS (de Kleer, 1986a, 1986b, 1986c) with MEKON, we can notice that the ATMS does not re-derive data, but changes the corresponding label instead. Justifications are not used in MEKON, therefore a datum can be re-derived in a different context. The difference between MEKON and the ATMS lies in the treatment of the initial data. In the ATMS, the initial data are represented by assumptions, where each assumption, at the beginning, comprises a separate context. In MEKON, all initial data are present in a context that represents the root of a context tree. If deriving a datum requires a lot of time, then it can be evaluated in the root context, hence it will not be necessary to re-derive it, because it will be propagated across the inheritance links to all context-descendants. In the case that the evaluation of a datum does not take a long time, then the derivation time will not be much longer than the label propagation through the justification net in the ATMS.

The main difference between the world mechanism in KEE (Filman, 1988; KEE, 1986) and MEKON lies in the use of justifications. Using justifications, KEE is able to modify automatically the state of a context according to the assertion or deletion of a datum. By this characteristic, KEE is similar to JTMSs, though it represents an ATMS. However, such additional flexibility has to be paid for by degraded performance. MEKON does not provide such a feature, because ATMSs maintain the consistency of the knowledge base by operating on the consistent contexts. The problems that require the examination of the search spaces do not demand the automatic modification of a context state, therefore this feature is not provided in MEKON.

5. An illustrative example

To illustrate the benefits of MEKON use within the blackboard environment, we will describe an example from one specialized C2 area — Air Force Tactical Decision Making (Vranes et al., 1992). Tactical decision making is the process of generating a sequence of actions posed by a military commander, involving the scheduling and utilization of air forces and other resources required to achieve predetermined military system goals, during simulated combat operations.

While conventional decision support systems are used for retrieving, manipulating or summarizing data in a way that assists human decision making, intelligent decision aid systems are used for collecting, organizing and reapplying knowledge, including decision rules and criteria. Therefore, we believe that the use of knowledge-based systems as a decision aid for interactive tactical decision making offers several advantages over an unassisted human decision maker, or even over the human/conventional-DSS combination. The advantages of the knowledge-based tactical decision aid are indisputable, since it:

O decreases the time needed for tactical decision making
O improves the decision quality by reducing errors
O generates, classifies and weights the response alternatives
O provides a rapid feedback on decision consequences
O helps in the formulation of military doctrine
O encourages seeking for new doctrinal procedures and rules
O anticipates and helps prepare for contingencies
O assists a user to discriminate relevant from irrelevant information
O assists parallel information processing and keeping track of parallel events
O facilitates communication among the command team
O interactively includes the human decision maker's judgment and knowledge in the resulting automated output
O easily accommodates changes in the environment and the decision making approach of the user
O provides a flexible, systematically organized and accessible repository for the shared knowledge of various aspects of the decision making process
O allows the exploration of the working memory using the alternative reasoning strategies
O facilitates a rapid response to unforeseen changes, allowing the implication of changes in hypothesis to be reflected immediately
O allows a rapid application of qualitative criteria or decision rules to a variety of scenarios, etc.

The decision making problem involved here is complex, ill-defined, highly dependent on the commander and suffers from incomplete information. To provide the automated planning aids to the commander and to improve the quality of the decision, the AI methods and the overall system architecture must be chosen carefully. Therefore, we will first of all summarize the basic characteristics of knowledge-based tactical planning:

O the possibility of making decisions using an abstraction hierarchy (i.e. a step-wise progression rather than solving the whole problem at once)
O the possibility of reflecting the air force command hierarchy
O some means of representing the warfare areas (a geographic hierarchy and an information map describing and pinpointing the relevant terrain features)
O a conceptual abstraction hierarchy for the major air force equipment classes
O the possibility of generating decision alternatives
O the ability to rank the alternatives in terms of credibility and exactness
O the ability to explain the ranking
O some kind of temporal reasoning
O the use of both data-directed and goal-directed reasoning, since the best performance is achieved when the most appropriate technique is chosen independently for every subproblem at hand
O coherent means for representing different types of knowledge, etc.

There are several kinds of knowledge in the tactical decision making system. First, we must distinguish declarative knowledge, that describes domain objects, and procedural knowledge, that represents the decision making processes and describes how to reason upon that declarative knowledge. The most important types of the declarative domain knowledge are:

O Task/mission description. The input that initiates the decision making process is a description of a task or set of tasks to be accomplished by the forces and absolute or relative times and contingency. The mission description is almost always incomplete and needs further refinement.

O Knowledge about the environment. A planner needs to know critical, militarily significant aspects of the terrain (warfare area and its relation to map coordinates, important military objects/target positions, natural/artificial obstacles), and what its impact will be on combat operations.

O Knowledge about the weather conditions. The most important data for the air force are observation and visibility, time of day, cloud bases, mobility due to weather, etc.

O Knowledge about equipment. An abstraction hierarchy of the most important military equipment, with all the characteristics relevant to tactical decision making, must be present in the system.

O Knowledge about the enemy. In order to make a decision for friendly forces, some data about the enemy must be estimated:
  — identified units
  — current disposition
  — condition and strength
  — size of reserves
  — logistic support, etc.

O Knowledge about friendly forces. The same data as for enemy forces are taken into account for friendly forces, but the information quality is much higher.

O Knowledge about tactical plan structure. The tactical plan skeletal abstraction hierarchy and its conceptual components are known in advance — the higher level represents a conceptual coarse plan description and the lower level represents a more refined abstract plan description. The forms of decisions addressing a set of key plan issues are also well known.

All the mentioned examples of conceptual knowledge, which describe the domain objects (whether physical, like military equipment, or abstract/conceptual, like plans/decisions), can be represented using Prolog/Rex in terms of concepts.
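Since Prolog/Rex concepts are frame-like, a rough feel for such a representation can be given in Python; the slot names below are invented for illustration and are not taken from the actual knowledge base.

    class Concept:
        """Frame-style concept: a name, an optional parent, and slot values (sketch)."""

        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, dict(slots)

        def get(self, slot):
            # Slot look-up with inheritance from the parent concept.
            if slot in self.slots:
                return self.slots[slot]
            return self.parent.get(slot) if self.parent else None

    # Hypothetical fragment of an equipment abstraction hierarchy:
    aircraft = Concept("aircraft", max_speed=None, combat_radius=None)
    fighter = Concept("fighter", parent=aircraft, combat_radius=500)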

The human commander's decision making process, which a knowledge-based system is trying to imitate, is another, procedural part of knowledge in the system. The planning for every headquarters' level is prescribed by the operational doctrine, and almost always consists of the following activities (Figure 16):

O Mission reception and analysis. A commander clarifies the mission contained in the decision or order developed by the higher commander, and analyzes temporal, spatial, operational or any other kind of constraints contained in the decision.

O Situation assessment. During this activity a commander (expert system) evaluates the battlefield situation elements:
  – Terrain analysis
  – Enemy estimate
    — Personal estimate
    — Disposition estimate
    — Reinforcement estimate
    — Logistic estimate
  – Friendly forces estimate
  This situation assessment includes a great deal of reasoning with uncertainty, allowing preliminary conclusions to be drawn and then evaluated (confirmed or rejected) when the contributing data become more certain.

O Courses of action prediction. According to the estimated enemy capabilities, enemy intentions (i.e. the most likely enemy courses of action) are predicted.

O Concept of the operation generation. A coarse plan containing the concept of the operation is generated first.

O Decision options preparation. Due to the existence of the situation alternatives and the different credibility values of the plan generation rules, different viable plan options are generated.

O Decision evaluation/revision. Evidence of the truth (or denial) of a hypothetical decision stems from probabilistic estimates about more refined subproblems.

Figure 16: Tactical decision making activities

For all the above-mentioned steps, the doctrinal knowledge and operational rules prescribe the decision making. In the solution of this complex problem many commanders (experts) participate. Therefore the blackboard architecture, with its inherent flexibility and modularity, represents a natural choice for the automation of this problem.

Each of the activities can be implemented in BEST using a separate knowledge source. Prolog/Rex offers a flexible rule language (four kinds of domain rules) which helps the knowledge engineer to capture any type of commander expertise, and provides a flexible control of reasoning processes (backward-chaining, forward-chaining, mixed-initiative, different search strategies).

The solution of two activities requires the use of MEKON. The courses of action prediction activity represents an allocation problem. Knowing the enemy forces and possible targets, MEKON is used to explore the most likely enemy courses of action. The decision options preparation activity represents a scenario-making problem. For each possible enemy action a set of viable plan options should be generated. These two problems require the examination of two different search spaces, which cannot be performed using the standard TMSs. For the solution of these problems in BEST, two local blackboards would be used. The results of the first problem would be written onto the global blackboard and used as an input for the solution of the second problem.

6. Possible applications

There are many problems for which an algorithmic solution does not exist. Two broad classes of problems, whose solution requires examination of the search spaces, fall into this category. Constraint satisfaction and search problems represent these two classes from the problem-solving perspective.

Constraint satisfaction problems (CSPs) (Kumar, 1992) comprise the first class. A constraint satisfaction problem is specified by a set of variables, and a set of constraints on subsets of these variables limiting the values they can take. The solution of a CSP requires finding a point (or points) in a discrete hyperspace that satisfies all the constraints. The search process begins by assigning a value to the first variable. If any of the constraints is violated, a new value is assigned, otherwise the second variable is chosen and initialized. Now the values of the two variables are checked against the set of constraints, and if any of the constraints is violated, we try to find a new pair of values that satisfies all the constraints. When such a pair is found, the process continues by assigning a value to the third variable, and eventually terminates when all variables have values that do not violate any of the constraints. If such a point in a state space does not exist, the corresponding CSP does not have a solution. Various kinds of allocation, classification, and diagnostic problems (e.g. allocation of tasks on a distributed or multiprocessor platform) are all representatives of CSPs.
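The incremental assign-and-check process just described is, in essence, chronological backtracking. A compact, generic sketch (not MEKON's algorithm) is:

    def solve_csp(variables, domains, consistent, assignment=None):
        """Assign variables one at a time, backtracking whenever a constraint
        is violated; returns one satisfying assignment or None (sketch)."""
        assignment = assignment or {}
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            candidate = {**assignment, var: value}
            if consistent(candidate):            # check constraints over assigned variables
                result = solve_csp(variables, domains, consistent, candidate)
                if result is not None:
                    return result
        return None                              # no value works: backtrack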

The second class contains scenario- and plan-making problems (e.g. the route finding problem), whose solution requires the search for a path of state transitions from the initial to the final point (which does not represent a solution of the search problem as in CSPs, but only a goal problem state) in the search space. Notice that a solution for a problem from the second class does not contain the appropriate values for all state variables as in CSPs, but a path of state transitions. In the beginning, the initial and final problem states, defined by the sets of initial and final values of state variables, are known. Our task is to find a path of state transitions (by applying rules that describe and perform all permitted state transitions from one problem state to another) from the initial to the final point (points) in the search space. This path must not contain inconsistent problem states (those that violate any of the constraints). During the CSP solution, the number of state variables changes incrementally until all state variables have the appropriate values, whereas the solution of a problem from the second class requires the full number of state variables all the time, whose values are changed by applying the transition rules. A single step in the CSP solution is performed by assigning a value to a state variable, whereas in solving the problems from the second class, a single step is performed by executing a rule that will change the current problem state into another problem state.

Figure 17: Decision tree for the allocation problem

While a solution of the CSP is determined by a point in the state space, a solution of the problem from the second class is determined by a path that leads from the initial to the final point in the search space. Therefore we must use one approach to find a solution of the CSP and another approach to find a solution of the problem from the second class.

A problem of allocating a set of tasks on a distributed platform is a good representative of the allocation problems. The tasks are heterogeneous in their requests for CPU time, memory, special services, or I/O devices, while hosts in a distributed environment have different processing powers, memories, disk spaces and I/O devices, and offer different services. Furthermore, a solution must (or should) satisfy some of the following constraints:

O order of the tasks' execution (determined by a directed acyclic graph)
O time limit for the execution of all tasks in a real-time application
O safe margins in hosts' utilization
O dislocation of some tasks (in order to increase the reliability of the application)
O load balancing
O time limit for the off-line execution of the application that allocates tasks (for a relaxed time limit, the application gives a better solution), etc.

Solving such an allocation problem includes making assumptions about atomic allocations (e.g. allocate task i to host j) and incremental upgrading of the assumption set until the complete allocation, that satisfies all constraints, is found. A decision tree (Figure 17) is obtained by solving the allocation problem for three tasks (t1, t2, and t3), two hosts (h1 and h2), and two constraints (t1 must be executed on the same host before t2, and t1 must be dislocated from t3). The crossed-out nodes correspond to the rejected allocations, where the first or the second constraint is violated. We notice that, even for such a small example, we have a relatively large state space with 48 nodes containing the complete allocations (though some of them are equivalent), and 6 nodes containing the satisfying allocations. By increasing the number of tasks and hosts, the search space rapidly grows, thus making the allocation problem really complex. Large state spaces cannot be examined exhaustively, therefore intelligent algorithms (such as the ones used in MEKON) must be used to reduce the number of examined states and find the solution.
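For this small example, the two constraints can be written as a check over a (possibly partial) allocation and fed to the generic backtracking sketch above; the temporal part of the first constraint (t1 before t2) is not modelled here, and none of this reflects MEKON's internal algorithms.

    def consistent(allocation):
        """Constraint check for the three-task, two-host example (sketch).
        `allocation` maps a task name to a host name."""
        # Constraint 1 (host part only): t1 and t2 must run on the same host.
        if "t1" in allocation and "t2" in allocation:
            if allocation["t1"] != allocation["t2"]:
                return False
        # Constraint 2: t1 must be dislocated from t3.
        if "t1" in allocation and "t3" in allocation:
            if allocation["t1"] == allocation["t3"]:
                return False
        return True

    tasks = ["t1", "t2", "t3"]
    hosts = {t: ["h1", "h2"] for t in tasks}
    # With the solve_csp sketch above, solve_csp(tasks, hosts, consistent)
    # returns e.g. {"t1": "h1", "t2": "h1", "t3": "h2"}.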

In this class of problems we can include the classification problems, where assumptions about atomic allocations are replaced by assumptions about the current description of an object or state, and diagnostic problems, in which the assumptions represent symptoms, and where the maximal consistent set of symptoms is used to derive a diagnosis. These classification and diagnostic problems are completely equivalent to the allocation problems, therefore we will not describe them in greater detail. The only difference lies in the semantics of the sets of assumptions representing the solutions. While in the allocation problems these sets of assumptions correspond to the complete allocations, in the classification problems they represent consistent descriptions upon which the corresponding objects or states can be classified, and in the diagnostic problems they represent sets of symptoms characteristic of some diseases or faults.

Scenario-making is used to solve many problems concerning decision support. All kinds of games represent excellent examples of this class of problems. Each game is described by the initial situation and the set of rules. There can be two or more players involved. Each move changes the current state of the problem. In all these games, we have a decision making problem when it is our turn to play. We have to consider the current state of the game and the moves that we can play, and then try to predict the moves of the other game participants, assess the consequences of our and their moves, and make a decision. In Figure 18 we can see a decision tree for a chess game. The root node describes the current position on the chessboard. It is our turn, and on the first level of the decision tree all the reasonable and permitted moves are presented. Each of these nodes represents a new position on the chessboard after we have played the corresponding move. On the second level we can see all the intelligent responses to our move, and so on. We have to cut the decision tree at some level (n in Figure 18), because of the combinatorial explosion of states, then assess the positions at level n, and choose the move that leads to the most promising states at level n. Decision making for many real problems can be realized in a similar way. For instance, we can play war games, or make the strategic management decisions for a multi-product company upon scenarios about competitors' moves (the Product Market Portfolio method (McNamee, 1987)). The more accurate and precise our description of the problem, the more reliable the decisions we make. Identification of the applicable rules represents the main problem here.

Figure 18: Decision tree for a chess game

A group of planning problems can be included in the second class of problems. Such a problem can be described by a finite automaton with a set of nodes and state transitions. The problem is to find a set of state transitions that leads from the initial to the final state. This plan or procedure represents an applicable, or even optimal, solution of the problem, if weights can be attached to each state transition. Notice that if we have a large search space, a problem cannot be solved by creating and examining all the problem states and transitions. MEKON solves such a problem by examining only a limited, relatively small number of problem states. The well-known farmer's dilemma or city route planning are examples of such planning problems. City route planning requires finding the shortest way between two intersections.
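Staying with the farmer's dilemma, a plain breadth-first search over problem states finds such a path of state transitions; the sketch below reuses legal_transitions and apply_transition from the Section 4 sketch, adds the usual safety rule, and is a generic illustration rather than MEKON's search procedure.

    from collections import deque

    def unsafe(state):
        """A state is poisoned if fox and goat, or goat and cabbage,
        are left together without the farmer (sketch)."""
        f = state["farmer"]
        return ((state["fox"] == state["goat"] != f) or
                (state["goat"] == state["cabbage"] != f))

    def find_plan(start, goal_shore="shore 2"):
        """Breadth-first search for a path of state transitions (sketch).
        Assumes legal_transitions and apply_transition are in scope."""
        frontier = deque([(start, [])])
        seen = {tuple(sorted(start.items()))}
        while frontier:
            state, path = frontier.popleft()
            if all(side == goal_shore for side in state.values()):
                return path                      # the sequence of crossings
            for move in legal_transitions(state):
                nxt = apply_transition(state, move)
                key = tuple(sorted(nxt.items()))
                if not unsafe(nxt) and key not in seen:
                    seen.add(key)
                    frontier.append((nxt, path + [move]))
        return None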

The standard TMSs can be used in the solution of constraint satisfaction problems, while ART and KEE can be used to solve search problems too. However, these TMSs facilitate only the solution of a single constraint satisfaction or search problem. MEKON is the only TMS aimed at enabling the solution of complex problems that can consist of many constraint satisfaction and search subproblems. We have already mentioned a serious problem within the military domain concerning the decision making process, which involves the solution of various classification, allocation, planning and scenario-making problems. Furthermore, within a CIM model of a flexible production system (FPS) there are many diagnostic, planning and scheduling problems that must be solved to provide the optimal results.

MEKON can also be used in various business applications to solve the combination of different planning and scenario-making problems.

7. Conclusions

Though MEKON in essence represents an Assumption-based Truth Maintenance System, it has some specific characteristics. It does not use justifications, therefore there are as many instances of a datum as there are contexts in which this datum holds. As a consequence, we have improved the pattern-matching efficiency (though at the price of higher memory consumption), provided easy non-monotonicity handling, and provided the means (state variables, explicit context generation, and context-dependent assumption generation) for solving search problems (e.g. the farmer's dilemma). MEKON is integrated within the BEST environment, which also provides a knowledge representation language, inference engine, and user interface. Thus, BEST (as well as ART and KEE) allows a simple use of TMS capabilities. SNeBR also provides a TMS, knowledge representation language, inference engine, and user interface, but it can be used to solve constraint satisfaction problems (with human assistance) only. Other TMSs offer interfaces to an inference engine, which can represent an advantage when a user just wants to add truth maintenance capability to his or her own inference engine. MEKON is the first TMS implemented within a blackboard model that provides truth maintenance capabilities on local blackboards. This enables the solution of complex problems, with many distinct subproblems (including constraint satisfaction and search problems), that cannot be solved using other problem-solving tools. Furthermore, MEKON is the only TMS that facilitates the solution of various planning and scenario-making problems within a blackboard environment.

References

ART (1987) ART — Automated Reasoning Tool User's Manual, version 3.0, Inference Corporation, Los Angeles, CA.

BEST (1992) BEST — Blackboard-based Expert System Toolkit Programmer's Guide, version 2.1, Mihailo Pupin Institute, Belgrade, Yugoslavia.

Carver, N. and V. Lesser (1994) Evolution of blackboard control architectures, Expert Systems with Applications, 7, No. 1, 1–30.

Bharadwaj, A., A. Vinze and A. Sen (1994) Blackboard architecture for reactive scheduling, Expert Systems with Applications, 7, No. 1, 55–66.

Engelmore, R. and T. Morgan (1988) (Eds.), Blackboard Systems, Addison Wesley Publishing Company, 281–296.

Erman, L., F. Hayes-Roth, V. Lesser and D.R. Reddy (1980) The Hearsay-II Speech-Understanding System: Integrating knowledge to resolve uncertainty, ACM Computing Surveys, No. 12, 213–253.

Dean, L. and D. McDermott (1987) Temporal Data Base Management, Artificial Intelligence, 32, No. 1, 1–55.

de Kleer, J. (1986a) An Assumption-based Truth Maintenance System, Artificial Intelligence, 28, No. 2, 127–162.

de Kleer, J. (1986b) Extending the ATMS, Artificial Intelligence, 28, No. 2, 163–196.

de Kleer, J. (1986c) Problem Solving with the ATMS, Artificial Intelligence, 28, No. 2, 197–224.

de Kleer, J. (1988) A general labeling algorithm for Assumption-Based Truth Maintenance, Proceedings of the Seventh National Conference on Artificial Intelligence, Los Altos, CA: Morgan Kaufmann Publishers Inc., 188–192.

de Kleer, J. (1989) A Comparison of ATMS and CSP Techniques, Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Los Altos, CA: Morgan Kaufmann Publishers Inc., 290–296.

Doyle, J. (1979) A Truth Maintenance System, Artificial Intelligence, 12, 231–272.

Dressler, O. (1988) Extending the Basic ATMS, Proceedings of ECAI-88: Eighth European Conference on Artificial Intelligence, London, UK: Pitman Publishing, 535–540.

Dressler, O. (1989) An Extended Basic ATMS, in Proceedings of the Second International Workshop on Non-Monotonic Reasoning, Reinfrank, de Kleer, Ginsberg, and Sandewall (Eds.), Lecture Notes in Artificial Intelligence 346, Heidelberg, Germany: Springer-Verlag, 143–163.

Dubois, D., J. Lang and H. Prade (1990) Handling Uncertain Knowledge in an ATMS using Possibilistic Logic, Proceedings of the 1990 Workshop on Truth Maintenance Systems, Martins and Reinfrank (Eds.), Heidelberg, Germany: Springer-Verlag.

Fikes, R. and T. Kehles (1985) The role of Frame-Based Representation in reasoning, Communications of the ACM, 28, (9), 904–920.

Filman, R. (1988) Reasoning with Worlds and Truth Maintenance, Communications of the ACM, 31, No. 4, 382–401.

Inoue, K. (1990) Generalizing the ATMS: A model-based approach, ICOT Technical Memorandum TM-621, Tokyo, Japan: ICOT Research Center.

Joubel, C. and O. Raiman (1990) How Time Changes Assumptions, Proceedings of ECAI-90: Ninth European Conference on Artificial Intelligence, London, UK: Pitman Publishing, 378–383.

KEE (1986) KEE Software Development System User's Manual, IntelliCorp Inc., Mountain View, CA.

Kumar, V. (1992) Algorithms for constraint-satisfaction problems: A survey, AI Magazine, 13, No. 1, 32–44.

Lindsay, R., B. Buchanan, E. Feigenbaum and J. Lederberg (1980) Applications of Artificial Intelligence for Organic Chemistry: The Dendral Project, McGraw-Hill, New York.


Martins, J. and S. Shapiro (1988) A model for belief revision, Artificial Intelligence, 35, No. 1, 25–79.

Martins, J. (1991) The truth, the whole truth, and nothing but the truth, AI Magazine, 11, No. 5, 7–25.

McAllester, D. (1980) An outlook on Truth Maintenance, AI Memo 551, Massachusetts Institute of Technology, AI Lab., Cambridge, MA.

McDermott, D. (1991) A general framework for reason maintenance, Artificial Intelligence, 50, No. 3, 289–329.

McNamee, P. (1987) Tools and Techniques for Strategic Management, Pergamon Press.

Minsky, M. (1975) A Framework for Representing Knowledge, in P. Winston (Ed.), The Psychology of Computer Vision, McGraw-Hill.

Mitra, S. and A. Dutta (1994) Integrating optimization models and human expertise in decision support tools, Expert Systems with Applications, 7, No. 1, 93–108.

Nagao, M. and T. Matsuyama (1979) Structured analyses of complex photographs, Proceedings of the 5th International Joint Conference on Artificial Intelligence, 790–800.

Nii, P., E. Feigenbaum, J. Anton and A. Rockmore (1982) Signal-to-Symbol transformation: HASP/SIAP case study, AI Magazine, 2, No. 3, 23–35.

Pearl, J. (1984) Heuristics, Addison Wesley Publishing Company.

Rolston, D. (1988) Principles of Artificial Intelligence and Expert Systems Development, McGraw-Hill.

Rowe, N. (1988) Artificial Intelligence through Prolog, Prentice-Hall Inc.

Reiter, R. and J. de Kleer (1987) Foundations of assumption-based Truth Maintenance Systems, Proceedings of the Sixth National Conference on Artificial Intelligence, Los Altos, CA: Morgan Kaufmann Publishers Inc., 183–188.

Stanojević, M., S. Vraneš and D. Velašević (1994) Using the Truth Maintenance Systems: A tutorial, IEEE Expert, 9(6), 46–56.

Subasic, P., M. Stanojevic and S. Vranes (1991) Adaptive Control in Rule Based Systems, in Moudni, Bourne, and Tzafestas (Eds.), Proceedings of IMACS Symposium Modelling and Control of Technological Systems, Lille, France: Gerfidn, 764–769.

Terry, A. (1983) The CRYSALIS Project: Hierarchical Control of Production Systems, Technical Report HPP-83-19, Stanford University, Heuristic Programming Project.

Vraneš, S., M. Stanojević, M. Lucin, V. Stevanović and P. Subasic (1994) A Blackboard Framework on top of Prolog, Expert Systems with Applications, 7, No. 1, 109–130.

Vraneš, S. and M. Stanojević (1994) Prolog/Rex — a way to extend Prolog for better knowledge representation, IEEE Transactions on Knowledge and Data Engineering, 6, No. 1, 22–37.

Vranes, S. and M. Stanojevic (1995a) Integrating multiple paradigms within the Blackboard Framework, IEEE Transactions on Software Engineering, 21(3), 244–262.

Vranes, S. and M. Stanojevic (1995b) Prolog/Rex Inference Engine, Expert Systems, 12, No. 3, 239–252.

Vranes, S., M. Lucin, M. Stanojevic, V. Stevanovic and P. Subasic (1992) Blackboard metaphor in tactical decision making, European Journal for Operational Research, 61, Nos. 1–2, 86–97.

The authors

Mladen Stanojevic

Mladen Stanojević has been a research scientist in the Computer Systems Department at the Mihailo Pupin Institute in Belgrade since 1989. His major research interests include knowledge representation techniques, pattern-matching techniques, knowledge-based systems, blackboard systems, truth maintenance, and fuzzy logic. He received the Dipl. Ing. in 1989 in computer science, and the MS in 1994 in computer science, from the University of Belgrade.

Dusan Velasevic

Dusan Velasevic is a professor of computer science at the University of Belgrade. His research interests include compilers, multiprocessor and distributed operating systems, software engineering, and expert systems. He received his Dipl. Ing. in electrical engineering in 1963, and a PhD in computer science in 1971, from the University of Belgrade.


Sanja Vranes

Sanja Vraneš received the Dipl. Ing., MSc, and PhD degrees in electrical engineering from the University of Belgrade, Yugoslavia. She has worked as a Research Scientist at the Mihailo Pupin Institute for Automation and Telecommunications in Belgrade since 1980. She took a one-year sabbatical leave from 1993 through 1994 to work at the Advanced Manufacturing and Automation Research Centre (AMARC) at the University of Bristol, England. Her primary research interests are in the fields of multiparadigm programming, knowledge-based systems design, blackboard systems, knowledge representation, truth maintenance, and machine learning, in which she has published more than 40 papers.

Dr Vraneš is a member of the IEEE Computer Society and the AAAI.