Journal of Computer and Communications
Vol. 03, No. 08 (2015), Article ID: 58752, 8 pages
10.4236/jcc.2015.38002

A Dialogue System for Coherent Reasoning with Inconsistent Knowledge Bases

Silvio do Lago Pereira, Luiz Felipe Zarco dos Santos, Lucio Nunes de Lira

Department of Information Technology, FATEC-SP/CEETEPS, São Paulo, Brazil

Email: slago@fatecsp.br, luiz.santos58@fatec.sp.gov.br, lucio.lira@fatec.sp.gov.br

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 3 July 2015; accepted 9 August 2015; published 12 August 2015

ABSTRACT

Traditionally, the AI community assumes that a knowledge base must be consistent. Despite that, there are many applications where, due to the existence of rules with exceptions, inconsistent knowledge must be considered. One way of restoring consistency is to withdraw conflicting rules; however, this destroys part of the knowledge. A better alternative is to give precedence to exceptions. This paper proposes a dialogue system for coherent reasoning with inconsistent knowledge, which resolves conflicts by using precedence relations of three kinds: an explicit precedence relation, synthesized from precedence rules; an implicit precedence relation, synthesized from defeasible rules; and a mixed precedence relation, synthesized by combining the explicit and implicit precedence relations.

Keywords:

Defeasible Reasoning, Inconsistent Knowledge, Precedence Relation, Dialogue System

1. Introduction

A knowledge base is a set of rules representing the knowledge of an expert in a specific domain. Traditionally, the Artificial Intelligence (AI) community assumes that a knowledge base must be free of inconsistency; otherwise, it turns out to be useless for an automated reasoning system. This assumption is motivated by the ex falso quodlibet principle [1] , which establishes that “from a falsehood, anything follows”. According to this principle, an inconsistent knowledge base should force an automated reasoning system to collapse.

Despite that, there are many practical applications of automated reasoning where, due to the existence of rules with exceptions, inconsistent knowledge must be used (e.g., law, politics, and medicine) [2]. For example, let K be a knowledge base with the following pieces of knowledge: “penguins do not fly”, “birds fly”, and “Tweety is a bird”. Then, since there is no counter evidence, it is coherent to infer “Tweety flies” from K. Now, suppose that the new piece of knowledge “Tweety is a penguin” is inserted into K, resulting in a new knowledge base K′. Then, both “Tweety flies” and “Tweety does not fly” can be inferred from K′, and that is not coherent reasoning. One way of restoring the consistency of K′ is to withdraw one of its conflicting pieces of knowledge [3], but this destroys part of the knowledge. A better alternative is to give precedence to the exception “penguins do not fly”. In this case, only “Tweety does not fly” can be coherently inferred from K′. Indeed, by using precedence relations, coherent reasoning in the presence of inconsistency turns out to be possible.

In the last decades, reasoning with inconsistent knowledge has attracted great interest in the AI community. Nowadays, argumentation [4] is a common approach for coherent reasoning in presence of inconsistency, and several different formal models of argumentation have been proposed in the literature (e.g., [5] - [8] ).

This paper proposes a system for coherent reasoning, based on dialogical argumentation and defeasible reasoning, which resolves conflicts by using precedence relations of three kinds: an explicit precedence relation, synthesized from precedence rules; an implicit precedence relation, synthesized from defeasible rules; and a mixed precedence relation, synthesized by combining the explicit and implicit precedence relations.

The paper is organized as follows: Section 2 introduces the fundamentals of defeasible reasoning and explains how the three kinds of precedence relations are synthesized in our system; Section 3 describes the dialectical proof procedure on which our system is based; Section 4 presents some features of the dialogue system prototype implemented in Prolog; finally, Section 5 presents the conclusion of the paper.

2. Background

In this section, we start by defining the language used to specify knowledge bases in our dialogue system; then, we present the principles of defeasible reasoning with inconsistent knowledge; and, finally, we discuss how to synthesize three different kinds of precedence relations from the information declared in a knowledge base.

2.1. Knowledge Representation

An atom a denotes an atomic proposition. A literal is an atom a or a negated atom ¬a. Two literals L and L′ are complementary literals if L = a and L′ = ¬a, or L = ¬a and L′ = a. The literal ⊤ denotes a true proposition and it has no complementary literal. A conjunction is an expression L₁ ∧ … ∧ Lₙ, where each Lᵢ is a literal. Technically, a conjunction is only syntactic sugar for the set {L₁, …, Lₙ}. In particular, the trivial conjunction ⊤ denotes the empty set.

A defeasible rule is an expression B ⇒ L, where B is a conjunction, called the antecedent, and L is a literal, called the consequent. Intuitively, a defeasible rule B ⇒ L states that the literals in B are reasons to believe in L, if there is no evidence contrary to L. A defeasible rule B ⇒ L is consistent if the set formed by L and the literals in B has no complementary literals. A defeasible rule of the form ⊤ ⇒ L is a presumption. A labeled defeasible rule is an expression r : B ⇒ L, where B ⇒ L is a defeasible rule and r is a unique label identifying it. Two labeled defeasible rules are called conflicting defeasible rules if they have complementary consequents. Evidence against the consequent of a defeasible rule can emerge from its conflicts with other defeasible rules.

A precedence rule is an expression r ≻ r′, where r and r′ are labels of conflicting defeasible rules, stating that rule r precedes rule r′ (i.e., that the priority of rule r is higher than the priority of rule r′). Since precedence rules do not involve atoms of the logical language, they are considered meta-knowledge, whose only purpose is to provide the information necessary to resolve conflicts between defeasible rules.
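These notions are easy to operationalize. The following Python sketch is an illustration only (not the authors' Prolog prototype): literals are encoded as strings, with a "~" prefix standing for negation, and a rule is an (antecedent, consequent) pair.

```python
def complementary(l1, l2):
    """Two literals are complementary iff one is the negation of the other."""
    return l1 == "~" + l2 or l2 == "~" + l1

def conflicting(rule1, rule2):
    """Two defeasible rules conflict iff their consequents are complementary.
    A rule is an (antecedent, consequent) pair; the antecedent is a set of
    literals, with the empty set playing the role of the trivial conjunction."""
    (_, head1), (_, head2) = rule1, rule2
    return complementary(head1, head2)

birds_fly = ({"b"}, "f")        # 2: b => f  ("birds fly")
penguins_dont = ({"p"}, "~f")   # 3: p => ~f ("penguins do not fly")
# conflicting(birds_fly, penguins_dont) is True
```

A precedence rule such as 3 ≻ 2 would then simply be a pair of labels of two rules for which `conflicting` holds.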

A knowledge base is a finite set of consistent labeled defeasible rules and precedence rules. For example, a knowledge base containing the defeasible rules 2 : b ⇒ f and 3 : p ⇒ ¬f, together with the precedence rule 3 ≻ 2, where p, b, and f stand, respectively, for “penguin”, “bird”, and “fly”, encodes the Tweety scenario: the defeasible rule 2 states that “birds fly”, the defeasible rule 3 states that “penguins do not fly”, and the precedence rule states that the defeasible rule 3 has precedence over the defeasible rule 2.

2.2. Defeasible Reasoning

As already said, a defeasible rule B ⇒ L states that the literals in B are reasons to believe in the literal L, if there is no counter evidence to L. In this context, the symbols ¬, ∧, and ⇒ are not interpreted as in classical logic, since neither modus ponens (i.e., from B ⇒ L and B, infer L), nor modus tollens (i.e., from B ⇒ L and ¬L, infer that B does not hold) is valid for defeasible rules. In fact, even when the antecedent of a defeasible rule is true, its consequent may be false.

Defeasible reasoning is based on an inference rule called modus non excipiens [9]. This inference rule differs from modus ponens because it has an implicit premise stating that the consequent of a defeasible rule follows from its antecedent, provided that there is no exception to the rule. Therefore, defeasible reasoning produces only a contingent demonstration of a literal. In any case, a necessary (although not sufficient) condition to believe in a literal is that it can, at least, be defeasibly derived from the knowledge base.

A defeasible derivation tree T of a literal L from a knowledge base K is a tree such that:

§ The root of T is labeled with the literal L.

§ For each node of T labeled with a literal L′, there exists a defeasible rule B ⇒ L′ in K.

§ If B = ⊤, then the node labeled with L′ is a leaf in T; otherwise, if B = L₁ ∧ … ∧ Lₙ, then that node has exactly n children nodes, which are labeled with L₁, …, Lₙ, respectively.

A defeasible derivation tree is generated by a backward search procedure, similar to SLD-refutation [10]. For example, a defeasible derivation tree built from the knowledge base of Section 2.1 is depicted in Figure 1.

A literal L is defeasibly derivable from K if, and only if, there exists a defeasible derivation tree of L from K. For example, as shown in Figure 2, both literals f (“Tweety flies”) and ¬f (“Tweety does not fly”) are defeasibly derivable from the knowledge base of Section 2.1.
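Defeasible derivability can be sketched as a backward search in the spirit of the SLD-style procedure mentioned above. The fragment below is a minimal Python illustration (again, not the authors' prototype); the Tweety-style knowledge base is an assumption built from the rules discussed in Section 2.1.

```python
def derivable(literal, kb, seen=frozenset()):
    """True iff some defeasible derivation tree of `literal` from `kb` exists.
    `kb` maps rule labels to (antecedent, consequent) pairs; the `seen` set
    guards the backward search against cyclic rule chains."""
    if literal in seen:
        return False
    return any(
        all(derivable(l, kb, seen | {literal}) for l in body)
        for body, head in kb.values()
        if head == literal
    )

# Tweety scenario: 1: T => p, 2: p => b, 3: b => f, 4: p => ~f
kb = {1: (set(), "p"), 2: ({"p"}, "b"), 3: ({"b"}, "f"), 4: ({"p"}, "~f")}
# both "f" and "~f" are defeasibly derivable, which is exactly the
# inconsistency that precedence relations are meant to resolve
```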

Notice that defeasible derivation is a monotonic process, since extending a knowledge base with new knowledge cannot prevent the derivation of previously derived literals. Nevertheless, defeasible reasoning is a non-monotonic process, since extending a knowledge base with new knowledge can make a previously coherent conclusion become incoherent, and vice versa. For example, consider a knowledge base K with defeasible rules stating that “chickens are birds”, “birds fly”, and “chickens do not fly”, together with a precedence rule giving the last rule priority over “birds fly”, where c, b, f, and s stand for “chicken”, “bird”, “fly”, and “scared”, respectively. Clearly, both f and ¬f are defeasibly derivable from K. However, because of the precedence rule, the rule “chickens do not fly” is considered stronger than “birds fly” and, hence, only ¬f is a coherent conclusion from K. In other words, the arguments for f and ¬f attack each other, but the argument for ¬f defeats the argument for f. Now, suppose that K is extended with the information that the chicken is scared, together with a rule stating that “scared chickens fly”, which has precedence over “chickens do not fly”. Then, a third argument can be constructed from the extended knowledge base and, because of this precedence, the new argument defeats the argument for ¬f and reinstates the argument for f. As a result, the previously coherent conclusion ¬f becomes incoherent, and the previously incoherent conclusion f becomes coherent. This idea is illustrated in Figure 3.

It is worth noticing that, without the precedence rules, the conflicts between the arguments could not be resolved and, consequently, neither f nor ¬f could be accepted as a coherent conclusion. When two conflicting defeasible rules have the same strength, we say that they block each other.

Figure 1. Defeasible derivation tree.

Figure 2. Defeasible derivation trees of f and ¬f.

Figure 3. Attack, defeat and reinstatement.

2.3. Precedence Relations

Let ℒ be the set of labels used in a knowledge base K. A strict partial order over ℒ is a binary relation ≻ such that r ⊁ r (irreflexivity) and, if r ≻ r′ and r′ ≻ r″, then r ≻ r″ (transitivity), for all r, r′, r″ ∈ ℒ. Clearly, if ≻ is an irreflexive and transitive relation, it is also an asymmetric relation (i.e., if r ≻ r′, then r′ ⊁ r).

Let E be the set of precedence rules explicitly declared in K. We assume that the transitive closure of E, denoted by E⁺, is a strict partial order over ℒ. Moreover, since precedence rules between non-conflicting defeasible rules are useless, we discard them from E⁺. The resulting set is the explicit precedence relation over the defeasible rules declared in K. For example, consider the following knowledge base:

where a, f, w, c, s, and b stand for “animal”, “fly”, “winged”, “chicken”, “scared”, and “bird”, respectively. From this knowledge base, the set of explicitly declared precedence rules, its transitive closure, and the resulting explicit precedence relation can be computed.
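The transitive closure used to build the explicit precedence relation can be computed directly on pairs of labels. This Python fragment is an illustrative sketch; the label pairs below are invented for the example and are not taken from the knowledge base above.

```python
def transitive_closure(pairs):
    """Transitive closure of a precedence relation given as (r, r') pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))   # r > r' and r' > r'' imply r > r''
                    changed = True
    return closure

def is_strict_partial_order(rel):
    """A relation is a strict partial order iff its closure is irreflexive."""
    return all(a != b for a, b in transitive_closure(rel))

explicit = {(5, 3), (3, 2)}   # hypothetical precedence rules 5 > 3 and 3 > 2
# transitive_closure(explicit) also contains the derived pair (5, 2)
```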

An implicit precedence relation over the defeasible rules declared in K, based on the criterion of specificity [11], can also be defined. In this work, we adopt a criterion of specificity that favors two aspects of a defeasible rule: precision (the amount of information in the rule’s antecedent) and conciseness (the number of steps needed to reach the rule’s antecedent). Let r : B ⇒ L and r′ : B′ ⇒ L′ be conflicting defeasible rules in K; let R be the set of defeasible rules of K that are not presumptions; let K₁ be the knowledge base obtained from R by adding a presumption for each literal in B; and let K₂ be the knowledge base obtained from R by adding a presumption for each literal in B′. Then r is more specific than r′ if, and only if, each literal in B′ is defeasibly derivable from K₁, and there is at least one literal in B that is not defeasibly derivable from K₂. Intuitively, r being more specific than r′ means that the antecedent of r′ can be derived from the antecedent of r, but not the other way around (i.e., r is an exception to r′). For example, in the knowledge base above, rule 4 is more specific than rule 3, since the antecedent of rule 3 is derivable from the antecedent of rule 4, but not vice versa; intuitively, rule 4 is more precise than rule 3. Analogously, rule 3 is more specific than rule 2; intuitively, rule 3 is more concise than rule 2.
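The specificity test can be phrased directly in terms of defeasible derivability. The sketch below is an illustration in Python; the penguin/bird rules are assumptions standing in for the rules of the example, and `derivable` is a naive backward-chaining check.

```python
def derivable(lit, rules, seen=frozenset()):
    """Naive backward-chaining test for defeasible derivability."""
    if lit in seen:
        return False
    return any(all(derivable(l, rules, seen | {lit}) for l in body)
               for body, head in rules if head == lit)

def more_specific(r1, r2, strict_rules):
    """r1, r2: conflicting (antecedent, consequent) rules; `strict_rules` are
    the non-presumption rules of the knowledge base. r1 is more specific than
    r2 iff presuming r1's antecedent yields r2's antecedent, but not vice
    versa."""
    (b1, _), (b2, _) = r1, r2
    k1 = list(strict_rules) + [(set(), l) for l in b1]   # presume B1
    k2 = list(strict_rules) + [(set(), l) for l in b2]   # presume B2
    return (all(derivable(l, k1) for l in b2)
            and any(not derivable(l, k2) for l in b1))

strict = [({"p"}, "b")]     # "penguins are birds" (not a presumption)
r_pen = ({"p"}, "~f")       # "penguins do not fly"
r_bird = ({"b"}, "f")       # "birds fly"
# r_pen is more specific than r_bird, but not the other way around
```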

Let I be the set of implicit precedence rules automatically synthesized from the defeasible rules declared in K. Clearly, I is an irreflexive relation (since the specificity criterion is defined only for conflicting rules); I is an asymmetric relation (since, if r is more specific than r′, the antecedent of r′ is defeasibly derivable from the antecedent of r, but not vice versa); and I is a transitive relation with respect to conflicting rules. Therefore, I is an implicit precedence relation over the defeasible rules declared in K, and such a relation can be computed for the knowledge base above.

The synthesis of an implicit precedence relation is based only on the syntax of the defeasible rules declared in a knowledge base and, therefore, it has the advantage of being a criterion independent of the application domain. However, not all precedence rules can be defined in terms of specificity and, frequently, a knowledge base also contains explicit precedence rules defined by a domain expert. In this case, a mixed precedence relation (synthesized by combining the explicit and implicit precedence relations) may be used. Notice, however, that the union of the explicit and implicit precedence relations is not necessarily a strict partial order over ℒ, since the two relations can disagree about the relative precedence of two defeasible rules, as can easily be verified for the knowledge base above.

To solve this problem, we propose an algorithm that combines the explicit and implicit precedence relations, giving preference to explicit precedence rules. The algorithm starts with the union M of the explicit and implicit precedence relations. Then, while M is a cyclic relation, it finds the set W of the weakest edges in a shortest cycle in M, and removes them from M; the weakest edges in a cycle are those corresponding to implicit precedence rules. When the algorithm stops, M is an acyclic relation that still contains every explicit precedence rule. Therefore, the transitive closure of M is a strict partial order over ℒ, and it is the mixed precedence relation over the defeasible rules declared in K. The general idea of this process is depicted in Figure 4, considering an arbitrary situation involving eight labels. In this figure, explicit and implicit precedence rules are represented by plain and dotted lines, respectively, and the precedence rules resulting from the transitive closure of the acyclic relation are represented by dashed lines.
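The cycle-breaking step can be sketched as follows. This Python fragment is an illustrative reading of the algorithm described above (the edge sets and labels are invented for the example): it repeatedly locates a shortest cycle and removes the implicit edges in it, so explicit precedence rules always survive.

```python
from collections import deque

def shortest_cycle(edges):
    """A shortest directed cycle, as a list of edges, or None if acyclic."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    best = None
    for start in adj:
        queue, seen = deque([(start, [])]), {start}
        while queue:                      # BFS: paths explored by length
            node, path = queue.popleft()
            for nxt in adj.get(node, []):
                step = path + [(node, nxt)]
                if nxt == start:          # closed a cycle through `start`
                    if best is None or len(step) < len(best):
                        best = step
                elif nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, step))
    return best

def mixed_relation(explicit, implicit):
    """Acyclic union of the two relations, favoring explicit edges."""
    m = set(explicit) | set(implicit)
    while (cycle := shortest_cycle(m)) is not None:
        weakest = [e for e in cycle if e not in explicit]
        if not weakest:   # all-explicit cycle: impossible if the explicit
            break         # relation is a strict partial order
        m -= set(weakest)
    return m

# explicit 1 > 2 disagrees with implicit 2 > 1; the explicit edge survives:
# mixed_relation({(1, 2)}, {(2, 1), (2, 3)}) == {(1, 2), (2, 3)}
```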

3. The Dialectical Proof Procedure

As discussed in Section 2.2, arguments for and against a conclusion can be extracted from defeasible derivation trees. Arguments are similar to proofs but, since they can be defeated by stronger counterarguments, their conclusions cannot be warranted under all circumstances. In this section, we present the fundamentals of the dialectical proof procedure on which our system is based. Given a knowledge base K, this proof procedure can


Figure 4. Mixed precedence relation synthesis. (a) Weakest edges in cycles; (b) corresponding acyclic relation; (c) mixed precedence relation.

decide whether a conclusion can be coherently inferred from K, by analyzing its pros and cons. The dialectical proof procedure is a kind of persuasion dialogue [12], based on two main components: a communication language and a protocol. These components are explained in the next subsections.

3.1. The Communication Language

To communicate its viewpoint about an issue, an agent must use a locution. The communication language specifies the locutions that the agents can utter in a conversation [13]. In this work, we adopt the following locutions, where K is a knowledge base and L is a literal:

§ claim L, to claim that L is a coherent conclusion from K.

§ why L, to ask for reasons to believe that L is a coherent conclusion from K.

§ argue B ⇒ L, to argue that B is a reason to believe that L is a coherent conclusion from K.

§ agree L, to agree that L is a coherent conclusion from K.

§ retract L, to retract the claim that L is a coherent conclusion from K.

The dialectical proof procedure is modeled as a dialogue between two agents, a proponent and an opponent. A speech act is a pair formed by an agent and a locution. A dialogue starts with a speech act in which the proponent claims that a literal L is a coherent conclusion from K. The role of the proponent is to utter locutions defending that claim, and the role of the opponent is to utter locutions raising doubt about its truth. The attitude of the proponent is credulous, while the attitude of the opponent is skeptical.

3.2. The Protocol

A dialogue is a finite sequence of speech acts. The record of all speech acts uttered by the agents, since the beginning of a dialogue until a specific moment, is a narrative. A protocol specifies, for each narrative, the next legal speech act. A legal dialogue is a dialogue consisting only of legal speech acts, according to the protocol.

The protocol used in this work is succinctly described in Table 1. In this table, each speech act considered is the last utterance in the current narrative, and the replying agent plays the adversary role. For each speech act, the protocol specifies its legal replies, which can be attacking or surrendering replies. The protocol enforces that each reply must be coherent with all the previous locutions uttered by the agents, according to the current narrative.

The turn taking policy is implicitly defined by the reply structure imposed by the protocol (also specified in Table 1). An agent can give more than one reply to a speech act, repeated locutions are not allowed, and tentative replies must obey the order in which they are defined in Table 1.

During a dialogue, a dialectical tree with all relevant pros and cons for the initial claim is recursively built. The dialogue terminates when no legal reply in the current narrative is possible. A speech act is a winner if all its replies are losers; otherwise, if it has at least one winning reply, it is a loser. By definition, speech acts with the surrendering locutions agree and retract are losers. When a reply is a loser, the agent can backtrack and try another reply. At the end, the initial claim, that L is a coherent conclusion from K, is warranted if the initial speech act is a winner.
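The winner/loser evaluation over a dialectical tree is a simple recursion, sketched below in Python (an illustration only; the node labels follow the locution names used above, and the tiny tree is invented):

```python
def winner(node):
    """A speech act is a winner iff all of its replies are losers.
    A node is a (locution, replies) pair; surrendering locutions lose
    outright, and a node with no replies wins vacuously."""
    locution, replies = node
    if locution in ("agree", "retract"):
        return False
    return all(not winner(reply) for reply in replies)

# claim f <- why f <- argue b => f <- agree f
tree = ("claim", [("why", [("argue", [("agree", [])])])])
# "agree" loses, so "argue" wins, so "why" loses, so the initial claim wins
```

Backtracking over alternative replies, as described above, corresponds to the `all`/`any` alternation implicit in this recursion.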

For example, consider the following knowledge base:

Table 1. Protocol: speech acts and reply structure.

where b, f, c, and s stand for “bird”, “fly”, “chicken”, and “scared”, respectively. Figure 5 shows a dialectical tree warranting that a literal is a coherent conclusion from this knowledge base. Winners and losers are marked in the figure.

As said before, the agents play different roles in a dialogue: while the proponent defends the claim that L is a coherent conclusion from K, the opponent tries to raise doubts about that claim. Notice, however, that the opponent does not defend the opposite claim (i.e., that the complement of L is a coherent conclusion from K). Therefore, to win a dispute, the proponent must defeat the rules used by the opponent; whereas, to win a dispute, the opponent can defeat or block the rules used by the proponent. Moreover, when the proponent wins a dispute, L is accepted (and, consequently, the complement of L is rejected); on the other hand, when the opponent wins a dispute, L is rejected (and there is no guarantee that the complement of L is accepted). Indeed, this proof procedure adheres to the open-world assumption [14], according to which the value of a literal can be unknown. For example, a literal and its complement are both rejected as coherent conclusions when the rules supporting them block each other; notice that a credulous agent can agree with either one (Figure 6).

4. The Dialogue System Prototype

A prototype1 of the proposed dialogue system was implemented in Prolog [15] . It runs in interpreted mode, and its commands are executed as standard Prolog queries. The main commands offered by this prototype are described in Table 2. By default, the system uses a mixed precedence relation and runs in verbose mode.

In the knowledge representation language used in the prototype, the symbols ¬, ∧, ⇒, and ≻ are replaced by the operators not, and, then, and precedes, respectively; the literal ⊤ is replaced by the keyword true; and defeasible rules can contain free variables. For instance, Figure 7 (left, top) shows a knowledge base coded in this representation language and saved in a file named kb.pl.

Figure 5. Dialectical tree warranting a coherent conclusion.

Figure 6. The open-world assumption.

Figure 7. Dialogue system prototype: knowledge base representation, precedence relations, and query output.

Table 2. Main commands offered by the dialogue system prototype.

The command implemented by the predicate precedence_relations/1 shows the three kinds of precedence relations synthesized from a specific knowledge base. For instance, the precedence relations for the knowledge base kb.pl are shown in Figure 7 (left, bottom).

The command implemented by the predicate #/2 allows the user to ask whether a literal is a coherent conclusion from a knowledge base. Only ground literals are allowed in queries and, at each query, each defeasible rule with variables is automatically replaced by one of its ground instances, according to the literal used in the query. For instance, the result of the query kb # fly(tina) is shown in Figure 7 (right).

The implemented prototype was tested on a series of benchmark examples from the literature, and intuitively coherent results were obtained for all of them.

As future steps, we plan to study the formal properties of the dialogue system prototype, with respect to well known semantics for argumentation systems [5] , as well as to develop a graphical interface to show the dialectical tree structure and the relations between its arguments and counterarguments.

5. Conclusions

The ability to deal with inconsistent knowledge bases is relevant for many practical applications. As is well known, in such applications, inconsistency arises mainly due to the existence of rules with exceptions. Thus, one way of coping with inconsistency is to give precedence to exceptions. Based on this idea, this paper proposes a dialogue system for coherent reasoning with inconsistent knowledge bases, which resolves conflicts among defeasible rules by using precedence relations of three different kinds.

More specifically, this paper 1) shows how explicit and implicit precedence relations can be automatically synthesized from an inconsistent knowledge base and also how they can be combined to synthesize a mixed precedence relation (where explicit precedence rules can override conflicting implicit precedence rules); 2) presents a dialectical proof procedure that can be used to decide whether a specific conclusion can, or cannot, be coherently inferred from an inconsistent knowledge base; 3) implements a prototype system for coherent reasoning with inconsistent knowledge bases.

Future extensions of this work are the study of the formal properties of the proposed system and the development of a graphical interface for it.

Acknowledgements

This research (project number 800476/2014-0) is supported by CNPq (Brazilian National Council for Scientific and Technological Development), under grant numbers 305484/2012-5 and 102869/2015-4.

Cite this paper

Silvio do Lago Pereira, Luiz Felipe Zarco dos Santos, Lucio Nunes de Lira (2015) A Dialogue System for Coherent Reasoning with Inconsistent Knowledge Bases. Journal of Computer and Communications, 03, 11-19. doi: 10.4236/jcc.2015.38002

References

1. Carnielli, W.A. and Marcos, J. (2001) Ex Contradictione Non Sequitur Quodlibet. Proceedings of the II Annual Conference on Reasoning and Logic, Bucharest, July 2001, 89-109. http://wslc.math.ist.utl.pt/ftp/pub/MarcosJ/01-CM-ECNSQL.pdf

2. Walton, D. (2006) Fundamentals of Critical Argumentation. Cambridge University Press, Cambridge.

3. Potyka, N. and Thimm, M. (2014) Consolidation of Probabilistic Knowledge Bases by Inconsistency Minimization. Proceedings of the 21st European Conference on Artificial Intelligence, Prague, 27 May 2014, 729-734. http://www.mthimm.de/pub/2014/Potyka_2014.pdf

4. Efstathiou, V. (2010) Algorithms for Computational Argumentation in Artificial Intelligence. Ph.D. Thesis, University College London, London. http://discovery.ucl.ac.uk/1301992/1/1301992.pdf

5. Dung, P.M. (1995) On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and N-Person Games. Artificial Intelligence, 77, 321-357. http://www.sciencedirect.com/science/article/pii/000437029400041X

6. Modgil, S.J. and Prakken, H. (2014) The ASPIC+ Framework for Structured Argumentation: A Tutorial. Argument and Computation, 5, 31-62. http://www.cs.uu.nl/groups/IS/archive/henry/ASPICtutorial.pdf

7. Gorogiannis, N. and Hunter, A. (2011) Instantiating Abstract Argumentation with Classical Logic Arguments: Postulates and Properties. Artificial Intelligence, 175, 1479-1497. http://www0.cs.ucl.ac.uk/staff/a.hunter/papers/arglog.pdf http://dx.doi.org/10.1016/j.artint.2010.12.003

8. García, A.J. and Simari, G.R. (2004) Defeasible Logic Programming: An Argumentative Approach. Theory and Practice of Logic Programming, 4, 95-138. http://cs.uns.edu.ar/~ajg/papers/2004TPLPGarciaSimari.pdf http://dx.doi.org/10.1017/S1471068403001674

9. Verheij, B. (1999) Logic, Context and Valid Inference or: Can There Be a Logic of Law? In: Jaap van den Herik, H., et al., Eds., Legal Knowledge Based Systems, GNI, Nijmegen, 109-121. http://www.ai.rug.nl/~verheij/publications/pdf/jurix99.pdf

10. Kowalski, R. (1974) Predicate Logic as a Programming Language. Information Processing, North Holland Publishing Co., Amsterdam, 569-574. http://www.doc.ic.ac.uk/~rak/papers/IFIP%2074.pdf

11. Stolzenburg, F., et al. (2002) Computing Generalized Specificity. Journal of Applied Non-Classical Logics, 12, 1-27. http://link.springer.com/chapter/10.1007/978-94-017-1737-3_4

12. Besnard, P. and Hunter, A. (2008) Elements of Argumentation. MIT Press, Cambridge. https://mitpress.mit.edu/sites/default/files/titles/content/9780262026437_sch_0001.pdf

13. Prakken, H. (2006) Formal Systems for Persuasion Dialogue. Knowledge Engineering Review, 21, 163-188. http://www.cs.uu.nl/groups/IS/archive/henry/dgreview.pdf http://dx.doi.org/10.1017/S0269888906000865

14. Russell, S. and Norvig, P. (2010) Artificial Intelligence: A Modern Approach. 3rd Edition, Prentice Hall, Upper Saddle River.

15. Bratko, I. (2011) Prolog Programming for Artificial Intelligence. 4th Edition, Pearson, Canada.

NOTES

1Available at www.ime.usp.br/~slago/dsp.zip.