Intelligent Information Management
Vol.5 No.3(2013), Article ID:31735,24 pages DOI:10.4236/iim.2013.53007

Business Intelligence Expert System on SOX Compliance over the Purchase Orders Creation Process

Jesus Angel Fernandez Canelas1, Quintin Martin Martin2, Juan Manuel Corchado Rodriguez3

1Global Procurement, Nokia Siemens Networks, Madrid, Spain

2Statistics Department, University of Salamanca, Salamanca, Spain

3Computer Science Department, Universidad de Salamanca, Salamanca, Spain

Email: jefernan55@hotmail.com, corchado@usal.es, qmm@usal.es

Copyright © 2013 Jesus Angel Fernandez Canelas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received January 24, 2013; revised February 20, 2013; accepted April 1, 2013

Keywords: Multiagent Systems (MAS); Decision Support Systems (DSS); Sarbanes-Oxley Act (SOX); Argumentation; Artificial Intelligence; Business Intelligence (BI); Expert Systems (ES)

ABSTRACT

The objective of this work is to define a decision support system for SOX (Sarbanes-Oxley Act) compatibility and quality of the Purchase Orders Creation Process, based on Artificial Intelligence and Theory of Argumentation knowledge and techniques. The proposed model contributes directly both to scientific research in Artificial Intelligence and to business practice. From the business perspective, it promotes the use of artificial intelligence models and techniques to drive decision-making processes over financial statements. From the scientific and research perspective, the impact is based on the combination of: 1) an Information Seeking Dialog Protocol in which a requestor agent inquires the business case; 2) a Facts Valuation Protocol in which the previously gathered facts are analyzed; 3) the initial knowledge of a human expert, already incorporated via initial beliefs; 4) an Intra-Agent Decision Making Protocol based on deductive argumentation; and 5) a semi-automated Dynamic Knowledge Learning Protocol. Last but not least, a way of integrating this model into a higher level multiagent intelligent system is suggested, in which a Joint Deliberative Dialog Protocol and an Inter-Agent Deductive Argumentation Decision Making Protocol are described.

1. Introduction

On 16th October 2001, Enron, a US multinational company dedicated to the gas and electricity market, published its quarterly financial results with 600 million US dollars of losses, and its stock fell from 90 dollars to 30 cents. This was the beginning of its bankruptcy: thousands of employees were fired, its shareholders suffered significant losses, financial markets collapsed by contagion and social alarm shot up.

Only two months before, in August, Enron had reached its historical high on the stock exchange at 90 dollars per share, showing an apparently healthy financial situation.

The social alarm had been raised, irregular financial practices began to become visible and, after Enron’s collapse, companies like Global Crossing, WorldCom, Tyco or Adelphia, among others, followed. The main stock markets of the world suffered big price falls, and a lack of credibility and confidence spread over all financial markets.

In July 2002, the government of the United States approved the SOX Law (Sarbanes-Oxley Act) in response to all these financial scandals, with the ultimate aim of increasing governmental control over the economic and financial operations of companies, controlling the audits of their accounts, protecting investors, avoiding massive dismissals and trying to return calm to the financial markets.

This Law became a norm of obliged fulfillment in the United States but, at the same time, it became a de facto standard in the rest of the world, due to the high degree of globalization and mainly because companies headquartered in the United States, or operating on its stock markets, consolidate their results worldwide on the basis of the results of their subsidiaries in the rest of the world.

This forces the subsidiaries of these multinationals in other countries, despite being outside the United States, to comply with this Law as well, so as not to harm the parent company with regard to its fulfillment of the Law in the United States.

1.1. Problem Description

The problem here described is a decision problem with the following characteristics:

1) It is a decision problem: a decision has to be taken about the compatibility or not of a specific business case, focused on the Purchase Orders Creation Process.

2) The decision should be based on evidence: that evidence will be the basis of the decision and will be the proof presented to auditors and government control bodies.

3) Initial expert knowledge is needed: this Law tells what needs to be done but not how to do it. It is fundamental that this initial expert knowledge comes from a human expert with enough experience in this kind of cases.

4) The model should be able to learn dynamically from court decisions, government control bodies or other human experts, letting the initial knowledge evolve and grow far beyond its initial state.

1.2. Special Contribution of This Model

Existing models based on Multiagent Systems and Theory of Argumentation show the following limitations:

1) They have been designed to solve other types of problems, such as medical, legal, negotiation, e-commerce or learning ones.

2) They don’t have initial expert knowledge about SOX compatibility of the Purchase Orders Creation Process.

3) They don’t have any method to dynamically incorporate decisions coming from the courts or from the government control bodies in order to add this knowledge to the initial one.

This paper constitutes a novel approach to this kind of problem because the model has a structure optimized to solve this specific kind of problem, adds an initial knowledge base coming from a real human expert in this matter, and provides a learning method to dynamically incorporate court decisions and government control body decisions, letting the system evolve far beyond its initial state and improve its efficiency based on its accumulated experience.

1.3. Artificial Intelligence, Theory of Argumentation and SOX Regulation

In the present work, a method to support decisions on the fulfillment of the SOX Law is designed, using both Artificial Intelligence technologies and the Theory of Argumentation.

More in detail, the objective of the presented work is, on one side, to design an intelligent expert decision support system based on argumentative negotiation technologies to check whether certain economic and financial operations of companies are compatible with the above mentioned Law. This helps companies take corrective actions before it is too late and supports financial auditors in their decisions on whether the economic and financial operations of a certain company are compliant with the SOX legislation, providing them with a structured method based on recognized technologies of Artificial Intelligence, Negotiation Techniques and Argumentation Theory. On the other side, as a secondary objective, this system will provide a measure of the quality of the analyzed business case according to previously defined criteria.

This work is based on two fundamental areas:

1) Theory of Argumentation in Multiagent Systems, inside Artificial Intelligence area.

2) Legal Normative of Financial SOX Audit and its relationship with Computation and Intelligent Systems.

With regard to the first point, the basics of the Theory of Argumentation are analyzed inside the Artificial Intelligence area, and the basic principles of Multiagent Systems based on the Theory of Argumentation are revised as well.

With regard to the second point, related to the financial SOX regulation, the key points of this regulation are described, as well as its relationship with Information Technologies and Artificial Intelligence. After this analysis, several recent scientific articles on this matter are revised too.

Nowadays, the Artificial Intelligence area is really extensive due to the topics it covers, the quantity and quality of scientific studies and its connections with other areas of knowledge, as well as the areas in which it can be applied, like Medicine, Engineering, Industrial Processes or Finance. In relation to the work exposed here, we are going to focus on one subarea of Artificial Intelligence, Multiagent Systems, and its relationship with the Theory of Argumentation. On one hand, Artificial Intelligence tries to come closer to human reasoning models, either for simulation purposes or to apply these reasoning models to different areas of science, with the objective that certain systems or scientific and technological processes exhibit artificial reasoning behavior. On the other hand, the Theory of Argumentation, with a long history, attempts to model and characterize, from a theoretical point of view, the different patterns of human reasoning, based on its two fundamental pillars: Classical Logic and Mathematics.

One of the most important areas inside Artificial Intelligence, which in the last years has experienced important scientific advances, is the area of Multiagent Systems. This area provides the fundamental basis to model complex systems where all the elements interact with each other to reach individual or common objectives and where this interaction is critical to reach any objective. Inside the world of Computation, Information Technologies and Artificial Intelligence, the area of Multiagent Systems gains special relevance when we connect it with the Theory of Argumentation. It is at this point that we can provide complex Multiagent Systems with an internal logic that lets them behave using simulated reasoning processes with solid foundations in Formal Logic and mathematical models.

Here are three typical examples of the use of the Theory of Argumentation in different fields of Artificial Intelligence:

1) Non-monotonic reasoning. Here the Theory of Argumentation is used to identify, negotiate and solve inconsistencies inside the reasoning, and to generalize reasonings.

2) Reasoning and decision making under uncertainty. Here the Theory of Argumentation is useful to make inferences and, at the same time, to combine them with the concept of evidence.

3) Multiagent Systems. In this area, the Theory of Argumentation is especially useful to simulate reasoned interaction among the different agents of a certain system, as already commented.

From a theoretical point of view, the argumentation can be defined as the interaction process among different arguments to reach a conclusion. This conclusion can be a statement, an action proposal, a preference, etc.

With regard to the SOX Law, it is formed by eleven titles, and each title covers different aspects of the Law.

Articles 302, 404 and 906, out of the 67 articles contained in the SOX Law, are the most important ones because they make the management, and especially the General Director and the Financial Director, responsible for all the financial reports presented by the company.

With regard to Article 302, Corporate Responsibility for Financial Reports, the legislation in effect in the US forces companies to publish their financial results quarterly and annually.

Article 302 of the SOX Law forces the General Director and the Financial Director to personally certify, inside the periodically published results report, the following points:

1) Certification of Revision of the Report: Personal certification of the General Director and the Financial Director that they have reviewed the report.

2) Certification of Truthfulness: Personal certification of the General Director and the Financial Director that the report does not contain any material untrue statement or material omission that would make it misleading.

3) Certification of Financial Exact and Truthful Data: Personal certification of the General Director and the Financial Director that financial statements and related information fairly present the financial condition and the result in all material respects.

4) Certification of Internal Controls: Personal certification of the General Director and the Financial Director that they are responsible for internal controls and have evaluated these internal controls within the previous ninety days and have reported on their findings.

5) Certification over Publication of Deviations and Frauds: Personal certification of the General Director and the Financial Director that they have informed the auditor company of any deficiency detected in the design of the internal controls and any detected fraud.

6) Certification of Significant Changes in the Internal Controls: Personal certification of the General Director and the Financial Director about any change in the design of the internal controls and about any corrective action taken to repair any detected deficiency in the internal controls.

With regard to Article 404, Revision of the Internal Controls by Company Management, this article forces the company to include, in the annual report where its results are published, a report about the internal controls in effect inside the company containing the following points:

1) Management Responsibility over the Internal Controls: The report over the internal controls included in the annual results report has to include a statement declaring that the management of the company is responsible for defining and maintaining the internal controls needed for a correct financial reporting process.

2) Verification and Report from the Management of the Company about the Effectiveness of the Internal Controls: The report over the internal controls included in the annual results report has to inform on the results of the revision, carried out by the management of the company, of the effectiveness of the internal controls in effect inside the company.

3) Revision of the Previous Report by an Authorized Auditor Company: The authorized auditor company in charge of the audit of the financial results presented by the company should also audit the report from the previous point about the effectiveness of the internal controls.

With regard to Article 906, Corporate Responsibility for Financial Reports, this article is redundant with Article 302 previously explained and reinforces the General Director’s and Financial Director’s direct responsibility for the periodical financial results of the company.

This article clearly states the sanctions for the General Director and the Financial Director in case of inadequate reports or reports with errors which do not faithfully reflect the financial situation of the company.

The problem described above is a decision-making problem with the following main characteristics:

1) Decision-making problem: in the end, a decision has to be taken about the compatibility or not of the specific business case with this Law.

2) Decision based on evidence: that evidence will be the support of the decision and will be the proof presented to auditors and control organisms.

3) Initial, non-standardized expert knowledge is needed: this Law states what should be done but not how it should be done. This means that the source of the initial knowledge should be a human expert with enough experience in keeping business cases in a SOX-compliant state.

4) Being able to learn from current court resolutions so that this extra knowledge can be used in the future: some kind of learning method is needed to let the initial knowledge evolve and grow far beyond its initial state.

This Law affects every major economic or financial process in a company, such as the purchasing cycle, the financial cycle or the sales cycle. These major cycles are divided into different processes. For example, the purchasing cycle can be divided into the suppliers’ selection process, the suppliers’ contracting process, the approval of purchase orders, and so on. This kind of structure can be modeled very well with a Multiagent System (MAS) structure. Bearing in mind as well that the final decision should be based on evidence, Argumentation in combination with MAS is an optimal approach to model this kind of problem.

Existing models using these kinds of techniques, such as MAS and Argumentation, show limitations such as:

1) They were designed mainly to solve other types of problems, such as medical, legal, negotiation, trading, education or business ones (COSSAC, CARNEADES, AAC, TAC, INTERLOC, ARGUGRID).

2) They don’t have an initial expert knowledge base on SOX compliance.

3) They don’t have a learning method able to incorporate court resolutions into the initial knowledge base.

The model presented here is a novel approach to solving this kind of problem: it has a structure optimized for this specific problem, it incorporates an initial expert knowledge base coming from the experience of a human expert, and it incorporates a specific learning protocol to add current court resolutions to the initial knowledge base, letting the system evolve far beyond its initial knowledge state and increase its efficiency over time based on its accumulated experience.

This article is structured as follows: Section 2 describes the State of the Art of both relevant areas on which this article is based and states the starting point of this work. Section 3 describes the proposed model, specifying the key elements as well as the main protocols of the system. Section 4 presents a possible integration of the previously proposed system into a higher level multiagent system. Sections 5 and 6 provide a clear example of the use of our proposed model over a real business case. Finally, Section 7 remarks on the conclusions obtained.

2. State of the Art

2.1. Theory of Argumentation in Artificial Intelligence

The Theory of Argumentation has been broadly studied and investigated throughout the years inside the areas of Philosophy and Mathematical Logic.

Nowadays Artificial Intelligence is an important field of application of the Theory of Argumentation, and we can find traditional studies of this practical relationship in subjects like Decision Making, Logic Programming or Tentative Knowledge ([1] Fox, Krause & Ambler, 1992; [2] Krause et al., 1995; [3] Dimopoulos, Nebel & Toni, 1999; [4] Dung, 1995).

There are also more recent examples which show this relationship between the Theory of Argumentation and Artificial Intelligence, like: [5] Besnard & Hunter, 2008; [6] Bench-Capon & Dunne, 2007; [7] Kraus, Sycara & Evenchik, 1998; [8] IEEE Intelligent Systems on Argumentation, 2007; [9] Rahwan & Simari, 2009.

There are also some other important topics under investigation nowadays which show the wide range of possibilities of this relationship, for example: 1) Computational models of argumentation, 2) Argument-based decision making, 3) Deliberation based on argumentation, 4) Persuasion based on argumentation, 5) Information seeking and inquiry based on argumentation, 6) Negotiation and resolution of conflicts based on argumentation, 7) Analysis of risks based on argumentation, 8) Legal reasoning based on argumentation, 9) Electronic democracy based on argumentation, 10) Cooperation, coordination and team building based on argumentation, 11) Argumentation and game theory in Multiagent Systems, 12) Human-agent argumentation, 13) Modeling of preferences in argumentation, 14) Strategic behavior in argument-based dialogues, 15) Deception, truthfulness and reputation in argumentation-based interaction, 16) Computational complexity of argumentation-based dialogues, 17) Properties of argumentation-based dialogues (success, termination, etc.), 18) Hybrid models of argumentation and 19) Implementation of Multiagent Systems based on argumentation.

There are two different approaches to automatic argumentation: 1) Abstract Argumentation and 2) Deductive Argumentation. Abstract Argumentation is focused on the coexistence of arguments without getting into the detail of their meaning. It only takes care of the attack relationships among arguments and of their acceptability or not, and to what degree. One of the most important studies so far, whose concepts are still valid nowadays, is the Abstract Argumentation Systems of [4] Dung (1995). [10] Boella, Hulstijn & Torre (2005) proposed an extension of Dung’s model in which the arguments are dynamic elements not predefined in advance.

Models of Deductive Argumentation are the other approach to automatic argumentation. They are deductive models based on formulas and on Classical Logic. The arguments, in contrast to Abstract Argumentation, are complex elements that can be subdivided into simpler elements or arguments. Deductive Argumentation is able to manage the complexity of the internal structure of the arguments. The key concept inside this type of argumentation is logical deduction. The fundamental objective of any model of deductive argumentation is to reach a conclusion based on a support formed by arguments and deductive logical reasoning. In the literature we find a recent study carried out by [5] Besnard and Hunter (2008) which is focused on Deductive Argumentation inside the area of Artificial Intelligence.

Basically, Deductive Argumentation consists of managing non-evident information (information not yet known to be acceptable or truthful) and generating arguments for and against this information so that, after a process of deductive reasoning, a conclusion about its truthfulness or admissibility is reached.
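As an illustration, a deductive argument is usually written as a support and claim pair; the following formulation reflects the standard presentation of logic-based argumentation and is not quoted from the cited works:

```latex
% Standard form of a deductive (logic-based) argument over a knowledge base Delta.
% Given for illustration only.
\[
  \langle \Phi, \alpha \rangle
  \quad\text{with}\quad
  \Phi \subseteq \Delta,\qquad
  \Phi \nvdash \bot,\qquad
  \Phi \vdash \alpha,
\]
% and Phi minimal: no proper subset of Phi entails alpha.
% Phi is the support of the argument and alpha its claim or conclusion;
% a counterargument is an argument whose claim contradicts the support or claim of another.
```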

There are two fundamental reasons why the Theory of Argumentation gains special relevance in Multiagent Systems: 1) on one hand, the Theory of Argumentation finds in Multiagent Systems a wide field of practical application, allowing Multiagent Systems to benefit from an entire solid formal theory with a long history, where the existing formal models of the Theory of Argumentation offer a wide range of possibilities in the design of this kind of systems; and 2) on the other hand, Multiagent Systems find in the Theory of Argumentation a solid and formal base which provides those systems with a syntactic and semantic structure that helps in the design of these kinds of systems and in reaching their own objectives.

Multiagent Systems can use the Theory of Argumentation and its formal models for the internal reasoning of their individual agents or for shared reasoning among all the agents of the system. Shared reasoning among the agents of the system consists of the agents dialoguing with each other with the final objective of reaching a previously defined common objective. The communication among the agents is driven by specific dialogue protocols and is a key point in reaching the final objective.

It is very important to remark at this point that the communication among the agents which make up the Multiagent System is a key element to reach the objectives of this system. This communication will be based on different types of dialogues. And it is in this communication and in these dialogues where Multiagent Systems are closely related to the Theory of Argumentation, because the latter makes it possible to give these dialogues a formal structure based on preexisting argumentation models.

Basically, the success of a Multiagent System consists of achieving the objective for which it was designed. The degree of success in reaching this objective will depend to a great extent on fruitful communication among its agents. And thanks to the Theory of Argumentation, we can provide a solid formal base for this communication and the corresponding dialogues.

The design of Multiagent Systems, as well as the investigation of new formal models of argumentation, are two areas in continuous growth whose advances have a very positive impact on obtaining Multiagent Systems that are more efficient at reaching their final objective.

One of the most influential works in the communication area of Multiagent Systems inside Artificial Intelligence using argumentation techniques has been the work carried out by Walton and Krabbe, in which the basic concepts of communication dialogues and reasoning processes are described ([11] Walton & Krabbe, 1995). As stated by Walton and Krabbe, these are the main dialogue types: 1) Dialogues based on information seeking, 2) Dialogues based on questions, 3) Dialogues based on persuasion, 4) Dialogues based on negotiation, 5) Dialogues based on deliberation, 6) Dialogues based on dialectical battles, 7) Dialogues based on commands, 8) Dialogues based on discovery of alternatives, 9) Non-cooperative dialogues and 10) Educational dialogues.

[12] Cogan, Parsons and McBurney (2005) proposed a new type of dialogue between agents: verification dialogues. [13] Amgoud and Hameurlain (2006) proposed a model to select the right move in a dialogue between agents, in terms of the type of message and the content to be transmitted. [14] Tang and Parsons (2005) designed a specific deliberation dialogue model in which the global action plan of the full Multiagent System is formed by the union of the subplans of each agent after a deliberation process with the rest of the agents.

There are also some other authors ([15] Amgoud, Maudet & Parsons, 2000; [16] Reed, 1998) who propose modifications to the previously enumerated dialogues. In all these dialogue types, messages are exchanged among the involved agents according to several aspects, such as the dialogue type, the previous knowledge of the agents, the reasoning protocol or the argumentation technique. Other authors ([17] Parsons & Wooldridge, 2003; [18] Sklar & Parsons, 2004) have identified and formally defined the different types of messages that can be used in different dialogues, for example: 1) Assertion Messages, 2) Acceptance Messages, 3) Question Messages, 4) Challenge Messages, 5) Testing Messages and 6) Answer Messages. Those messages are defined in terms of a specific semantics implemented by preconditions and postconditions.

The relationship between the Theory of Argumentation and Multiagent Systems is widely supported nowadays by the scientific research community, as we can see in the following examples. 1) [19] Belesiotis, Rovatsos & Rahwan (2009) designed a dialogue model based on reasoning, deliberation and tentative knowledge to apply Argumentation Theory over situation calculus plans. 2) [20] Devereux and Reed (2009) proposed a specific model for strategic argumentation in rigorous persuasion dialogues, which pushes the concept of attacking not only the initial knowledge of the agents but also the missing knowledge that does not belong to the agent. 3) [21] Matt, Toni & Vaccari (2009) designed a model based on dominant decisions for argumentative agents. The idea behind this work is that all possible decisions provided by each agent are valued based on previously indicated preferences, looking to maximize the final benefit. This mechanism is also a procedure to self-explain the winning decision. 4) [22] Wardeh, Bench-Capon & Coenen (2009) proposed a multi-party argument model based on the past experience of the agents to classify a specific case. This work promotes the idea that each agent uses data mining techniques and association rules to solve the case based on its own experience. 5) [23] Morge and Mancarella (2009) proposed an assumption-based argumentation model to drive the argumentation process between agents, with the objective of reaching the optimal agreement among all the agents. 6) [24] Thimm (2009) proposed an argumentation model for multiagent systems based on Defeasible Logic Programming in which each agent generates supporting and opposing arguments to answer the objective question. At the end, the most feasible argument is selected to answer the initial question.

2.2. Intelligent Models Applied to SOX


Here it is shown how Information Technologies, through Artificial Intelligence, help and support Decision Making related to the mandates this Law establishes. Some of these studies predate the SOX Law; they showed the existing concern about whether companies published truthful financial reports and suggested several intelligent systems to support financial auditors in their decision-making processes to state whether those reports were truthful or not.

[25] Changchit, Holsapple & Madden (1999), before the SOX Law, remarked on the concern about truthful financial reports of companies and on the positive impact of using intelligent systems to identify problems in the internal controls of those companies. It constitutes a good example of interaction between Artificial Intelligence and the Financial Area. [26] Meservy (1986) designed an expert system to audit the set of internal controls of companies. This work also predates the publication of the SOX Law.

[27] O’Callaghan (1994) suggested an Artificial Intelligence application based on neural networks with backpropagation to simulate the revision of the fixed assets of a company using a system of internal controls based on the COSO (Committee of Sponsoring Organizations of the Treadway Commission) model. In a recent work by [28] Liu, Tang & Song (2009), an evaluation model of internal controls based on fuzzy logic, pattern classification and data mining is presented, with the objective of checking the effectiveness of the internal controls of companies.

[29] Kumar & Liu (2008) designed a model that uses pattern recognition techniques to audit the internal controls and processes of the company. [30] Changchit & Holsapple (2004) designed an expert system for the evaluation of internal controls by the management of the company. The final objective is to evaluate the effectiveness of the structure of the internal controls of the company.

[31] Korvin, Shipley & Omer (2004) published a study about the possible internal controls that can be defined inside a computer system focused on the financial management of a company, and valued, using fuzzy set logic, the risks of certain specific threats. [32] Deshmukh & Talluru (1998) designed another model to value the risks of specific threats in the internal controls of the company. This work is based on fuzzy set theory and lets the management of the company decide whether their internal controls are effective and take appropriate actions.

In Reference [33], Fanning & Cogger (1998) proposed a Fraud Detection Model based on Neural Networks using as input the data published by the company in its periodical results. It is another example in which Artificial Intelligence provides its tools to the Financial Area. Fanning and Cogger based their study on two previous studies which applied neural network techniques to Economy and Finance ([34] Coakley, Gammill & Brown, 1995; [35] Fanning & Cogger, 1994) and combined them with traditional statistical techniques to create their model for predicting fraudulent financial reports.

In Reference [36], Welch, Reeves & Welch (1998) proposed a specific model to search for financial fraud and support audit decisions based on the use of genetic algorithms. This work is focused on fraud research on suppliers of the government. The model looked for specific fraud patterns to identify evidence of these frauds. In Reference [37], Srivastava, Dutta & Johns (1998) proposed a specific model to evaluate and plan audits using belief functions based on intelligent expert systems.

In Reference [38], Sarkar, Sriram & Joykutty (1998) developed an expert system based on belief networks, using probabilistic models in the inference process.

It is necessary to remark that the concern about truthful and clear financial reports existed before the SOX Law, but this Law establishes a clear legal framework with very well defined identification of responsibilities.

With the SOX Law in effect, companies are forced to establish certain internal controls inside key processes of the company to give visibility and transparency to all the operations carried out. Given the high technological level existing nowadays and the large volumes of information managed, the implementation of internal controls in the computer systems used by these companies becomes mandatory and necessary.

For this reason, it is necessary to implement internal controls inside the information systems used by the areas of Purchasing, Sales, and Finance and Control. These internal controls have been transformed into new requirements or functionalities that any information system should have in order to be compatible with the SOX Law in effect.

The main objective of these internal controls is to monitor purchase, sales or financial transactions so that every operation is visible to the management of the company and is made according to the rules and established processes. The General Director and the Financial Director are the persons responsible for certifying, in front of the control organisms, the truthfulness and transparency of all the operations, and that no fraudulent hidden operations, with the corresponding negative impact for the shareholders of the company, have been carried out.

Nowadays, in relation to the model designed here, after revising different international bibliographical sources and to the best of our knowledge, no publication has been found that uses Multiagent Systems and the Theory of Argumentation in the implementation of SOX internal controls with the objective of identifying whether the Purchase Orders Creation Process of a specific business case is compatible with the SOX Law, supporting auditors and companies in taking their appropriate decisions about this SOX compliance.

3. Proposed Model

The objective of the present work is to design an argumentative SOX compliance decision support system over the Purchase Orders Creation Process of the financial products and services Purchasing Cycle, using technologies of both Artificial Intelligence and Argumentative Negotiation, to support companies in identifying non-SOX-compliant situations before it is too late and to support financial auditors in deciding whether the periodical economic and financial results published by those companies are compliant with the SOX Law.

It is also explained how this system can be incorporated into a higher level multiagent intelligent expert system to cover the full financial purchasing cycle.

In general, in any company, there are seven different key financial cycles: 1) Purchasing Cycle, 2) Inventory Cycle, 3) Sales Cycle, 4) Employees Payment Cycle, 5) Accounting Cycle, 6) Information Technologies Cycle (as support to the other financial cycles) and 7) Services Outsourcing Cycle.

The economic and financial results published by a company will be compatible with the SOX Law if all the economic and financial operations that belong to these results are SOX compliant as well. Likewise, all those economic and financial operations are SOX compliant if all the projects or business cases that compose those results are SOX compliant too. A specific business case will be SOX compliant if all the financial cycles that constitute it are compatible with the SOX Law.

The key processes that compose a typical Purchasing Cycle are usually: 1) Suppliers’ Selection, 2) Suppliers’ Contracting, 3) Approval of Purchase Orders, 4) Creation of Purchase Orders, 5) Documentary Receipt of Orders, 6) Imports, 7) Check of Invoices, 8) Approval of Invoices without Purchase Order and 9) Suppliers’ Maintenance. The Purchasing Cycle of a certain business case will be compatible with the SOX regulation if all its processes are SOX compliant. This proposed model is focused on the Purchase Orders Creation Process of the Purchasing Cycle and its compatibility with the SOX regulation.

The decision support system designed here is going to be implemented by an argumentative intelligent expert agent whose objective is to help companies and auditors decide whether the Purchase Orders Creation Process followed in the analyzed business case is compatible with the SOX Law and, as a second objective, to provide a measure of the quality of that process as carried out in the analyzed business case.

The agent has been designed with a specific structure optimized to reach the final objective of the system. These are the elements that compose this structure:

1) Agent’s Objective.

2) Initial Beliefs or Base Knowledge of the Agent.

3) Information Seeking Dialog Protocol.

4) Facts Valuation Protocol based on Agent’s Beliefs.

5) Agent’s Valuation Matrix over the Business Case Facts based on its Beliefs or Knowledge Base.

6) Intra-Agent Decision Making Protocol (Intra-Agent Reasoning Process on SOX Compatibility based on Deductive Argumentation. Conclusive Individual Phase of the Agent).

7) Dynamic Knowledge Learning Protocol.

3.1. Agent’s Objective

The agent’s main objective is to verify whether the Purchase Orders Creation Process of the business case being analyzed is compatible with the SOX legislation.

As a secondary objective, it will provide a measure of the quality of that process as carried out in the analyzed business case. For both objectives, it will be checked whether every belief of the initial beliefs base matches a fact of the facts base of the business case and, in case of matching, to what degree (quantitative value of this matching).

3.2. Beliefs or Base Knowledge

In this section, the initial knowledge of the agent is gathered as a set of beliefs. It represents the knowledge the agent has of the specific analyzed process without taking into account any other possible knowledge derived from experience and learning. These beliefs are enumerated below and their characteristics indicated.

1) Creation of Purchase Orders:

This is a key belief of the knowledge base of this agent. The existence or not of a fact of the analyzed business case that matches this belief will be a key point for SOX compatibility as well as for the final valuation of the quality of the Purchase Orders Creation Process.

This is a critical factor from the SOX legislation point of view. SOX legislation always looks for transparency in all business cases of the company, and it expects the company to make all its decisions in the best interest of the investors, according to the Law.

This belief mainly refers to verifying whether, in the analyzed business case, the purchase orders were created according to the following guidelines: 1) the approval of the purchase order existed beforehand; 2) the purchase order was created before any work was performed or any goods were received; 3) the pricing, terms and conditions indicated in the purchase order document are the ones reflected in the contract; and 4) once the services or goods have been received, the person who acts in the name of the company receiving them records this reception in a written and signed document for further revision.
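As an illustration, these four guideline checks could be encoded as follows; the field names are hypothetical and only illustrate the idea, they are not part of the published model:

```python
from dataclasses import dataclass

@dataclass
class PurchaseOrderEvidence:
    """Evidence gathered from the business case for one purchase order (hypothetical fields)."""
    approval_exists: bool            # 1) the purchase order approval existed beforehand
    created_before_delivery: bool    # 2) the PO was created before work or goods were received
    matches_contract_terms: bool     # 3) pricing, terms and conditions match the contract
    signed_reception_document: bool  # 4) a written, signed reception document exists

def creation_guidelines_met(ev: PurchaseOrderEvidence) -> bool:
    """True only when all four guidelines of the 'Creation of Purchase Orders' belief hold."""
    return (ev.approval_exists
            and ev.created_before_delivery
            and ev.matches_contract_terms
            and ev.signed_reception_document)
```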

2) Monitoring of Purchase Orders:

This is a key belief of the knowledge base of this agent. The existence or not of a fact of the analyzed business case that matches this belief will be a key point for SOX compatibility as well as for the final valuation of the quality of the Purchase Orders Creation Process.

This is a critical factor from the SOX legislation point of view; SOX legislation always looks for transparency in all business cases of the company. This belief mainly analyzes whether there is a periodical revision of the purchase orders to assure that the purchase orders creation process is the right one and that there is no purchase without a specific prior purchase order.

3.3. Information Seeking Dialog Protocol

This protocol is designed to let the agent interrogate the analyzed business case, looking for relevant information to be analyzed later on, in order to determine, on the basis of the initial knowledge of the agent, the degree of quality of the process followed in that business case, as well as to assess whether the above mentioned process has complied with the SOX regulation. The agent inquires the business case according to the beliefs it has in its initial knowledge and, for every question, the agent will gather from the business case an answer with the detailed information needed for every belief.

This protocol is designed bearing in mind two ideas: 1) one of the most important elements of an agent is its initial knowledge, formed by its beliefs, and 2) a business case can be considered as a set of facts which constitute all the information about how things were done along the life of the above mentioned business case. The aim of this protocol is to capture, for every belief of the agent, the corresponding fact of the facts base of the business case. Once captured, it will be necessary to see how much it is in line with the specific belief of the agent, both from a quality point of view and from a SOX compliance point of view.

Basically, this protocol consists of the idea that the agent asks the business case, “how did you do this?”, and the business case answers the agent with the “arguments” or “evidences” of how it did it; evidence that will later be analyzed by the agent. It is necessary to keep in mind that the agent has a clear idea of how things should be done in every stage of the business case, based on its initial knowledge, and that what the agent is looking for is to analyze whether, inside the business case, things were done as they should have been.

This Information Seeking Dialog Protocol constitutes a phase in which the agent individually explores the whole documentation of the analyzed business case with the objective of compiling as much evidence as possible on how things were done. Those beliefs, as already commented, constitute the initial or base knowledge of the agent and represent the fundamental characteristics of the process that the agent is analyzing.

The Purchase Orders Creation Agent analyzes the Purchase Orders Creation Process, and in the above mentioned process there is a series of key characteristics. These kinds of details are “beliefs” of the agent and, more importantly, inside these beliefs, inside the agent’s initial knowledge, the agent has a clear idea of how things should be done.

When the agent analyzes the business case with this protocol, it compiles all the facts of the business case which match its beliefs. It can happen that, for a certain belief, a fact does not exist in the facts base of the business case, denoting steps inside the business case that should have been done and were not. With this protocol, the agent will take this into consideration in the coming stages, when valuing the quality of the process and taking the appropriate decision about SOX compatibility.

The inspection of the agent over the business case will be carried out through a mediating agent which facilitates the communication between both, as sketched below. This mediating agent represents the person responsible for the business case in the company and, for each question of the agent analyzing the case, can search inside the business case documentation and provide a response to the formulated question.
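A possible sketch of this question and answer exchange through the mediating agent follows; the class and function names are assumptions made for illustration, since the paper specifies the protocol only at the conceptual level of Figure 1:

```python
from typing import Dict, List, Optional

class MediatingAgent:
    """Stands for the person responsible for the business case; answers from its documentation."""
    def __init__(self, facts_base: Dict[str, dict]):
        self.facts_base = facts_base  # evidence indexed by the aspect of the process it documents

    def answer(self, question: str) -> Optional[dict]:
        # Return the matching fact, or None when the business case holds no evidence for it.
        return self.facts_base.get(question)

def information_seeking_dialog(beliefs: List[str],
                               mediator: MediatingAgent) -> Dict[str, Optional[dict]]:
    """For every belief, ask the mediating agent 'how did you do this?' and keep the evidence."""
    return {belief: mediator.answer(belief) for belief in beliefs}

# The two beliefs of the Purchase Orders Creation Agent:
beliefs = ["creation_of_purchase_orders", "monitoring_of_purchase_orders"]
```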

Figure 1 presents the protocol in which the agent inquires the analyzed business case with the objective of gathering the needed information about its beliefs. This collected information will allow valuing the initial beliefs from the SOX compatibility point of view and from the quality point of view.

Let’s see in the next section how to value these collected facts.

3.4. Facts Valuation Protocol Based on Agent’s Beliefs

This protocol allows the agent to value the facts previously gathered as evidence with the Information Seeking Dialog Protocol. The valuation of this evidence will be carried out based on two approaches: 1) quality of the process and 2) compatibility with the SOX legislation. Two weight factors have been assigned to each belief, one for quality and one for SOX compatibility. The quality weight denotes the relevance of that belief in the global valuation of the quality of the whole analyzed process. The SOX compatibility weight only denotes whether this specific belief is relevant from a SOX compliance point of view. The quality weight will be used numerically to calculate the final quality of the specific analyzed process. The SOX compatibility weight won’t be used numerically; it will only indicate whether that belief is relevant for compatibility with the SOX legislation.

Regarding the valuation of quality, there will be numeric values inside the range [−10, 10], where −10 denotes a penalization in the valuation of quality and 10 denotes the maximum value of quality. Regarding the valuation of SOX compatibility, the possible values will be logical boolean values: true (t) or false (f). True denotes that this belief matches a fact of the facts base of the analyzed business case and, therefore, that the process analyzed by this agent, regarding that belief, is compatible with the SOX legislation. A false value means the opposite.

Figure 1. Information seeking dialog protocol.

This is an example (Table 1):

This agent has two key beliefs composing its initial base knowledge: 1) Creation of Purchase Orders and 2) Monitoring of Purchase Orders. This is the valuation protocol for each of those beliefs (see Tables 2 and 3 below):

1) Creation of Purchase Orders:

2) Monitoring of Purchase Orders:
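Since Tables 2 and 3 are not reproduced here, the following sketch only illustrates how a belief valuation could be represented; the field names and figures are placeholders, not the published weights:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BeliefValuation:
    name: str
    quality_weight: float      # relevance of the belief in the global quality of the process
    sox_weight: int            # 1 if the belief is relevant for SOX compatibility, 0 otherwise
    quality_value: float       # in [-10, 10]; -10 penalizes, 10 is the maximum quality value
    sox_value: Optional[bool]  # True = SOX_COMPLIANT, False = NON_SOX_COMPLIANT, None = NA

# Placeholder figures, purely for illustration:
valuations = [
    BeliefValuation("creation_of_purchase_orders", 0.6, 1, 8.0, True),
    BeliefValuation("monitoring_of_purchase_orders", 0.4, 1, 6.0, True),
]
```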

3.5. Agent’s Valuation Matrix over the Business Case Facts Based on Its Beliefs or Knowledge Base

This section shows, in table format (Table 4), all the valuations gathered by the previous Facts Valuation Protocol based on Agent’s Beliefs over each one of the facts of the analyzed business case.

It is necessary to highlight, as indicated before, that the SOX compatibility weights indicate whether that belief is relevant from the SOX compatibility point of view. If the belief is relevant for SOX compatibility, it will be indicated with a unitary weight (1), and its value, according to the previous protocol, will be true (t), meaning SOX_COMPLIANT, or false (f), meaning NON_SOX_COMPLIANT. If the belief is irrelevant for SOX compatibility, its weight will be null (0) and its value won’t be relevant (it does not apply, NA).

Table 1. Facts valuation protocol based on agent’s beliefs.

Table 2. Purchase orders creation valuation protocol.

Table 3. Purchase orders monitoring valuation protocol.

The final valuation of SOX compatibility of the whole agent over the specific process being analyzed will be calculated by an inference rule described in more detail in the next protocol (Intra-Agent Decision Making Protocol). The final valuation of the quality of the process analyzed by this agent will be given by the weighted sum of all the quality values obtained for each one of the analyzed facts of the business case.
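A minimal sketch of that weighted sum; the weights and values below are placeholders, the real ones being those of the valuation matrix in Table 5:

```python
from typing import Iterable, Tuple

def overall_quality(valuations: Iterable[Tuple[float, float]]) -> float:
    """Weighted sum of the quality values obtained for each analyzed fact.

    `valuations` holds (quality_weight, quality_value) pairs, with quality
    values in the range [-10, 10].
    """
    return sum(weight * value for weight, value in valuations)

print(overall_quality([(0.6, 8.0), (0.4, 6.0)]))  # placeholder figures -> 7.2
```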

Table 5 describes in more detail the Valuation Matrix over the Facts for the Purchase Orders Creation Process.

3.6. Intra-Agent Decision Making Protocol. (Intra-Agent Reasoning Process on SOX Compatibility Based on Deductive Argumentation. Conclusive Individual Phase of the Agent)

This section shows the reasoning side of the agent which, using a deductive argumentation protocol, makes its own decision about whether the process of the analyzed business case is SOX compliant. This protocol is based on Classical Logic or Predicate Logic, and its central element is an inference rule which uses as arguments the result of the valuation of beliefs from the previous phase (Agent’s Valuation Matrix over the Business Case Facts based on its Beliefs or Knowledge Base), specifically those beliefs relevant for SOX compatibility.

The objective of this protocol is to try to demonstrate the truthfulness of a hypothesis that establishes that the process that is being analyzed by this agent is compatible with the SOX legislation (Table 6).

To demonstrate the truthfulness of this hypothesis, the agent relies on the following elements:

1) Agent’s Beliefs or Agent’s Base Knowledge.

2) Information Seeking Dialog Protocol.

3) Facts Valuation Protocol based on Agent’s Beliefs.

4) Agent’s Valuation Matrix over the Business Case Facts based on its Beliefs or Knowledge Base.

5) Dynamic Knowledge Learning Protocol.

6) Intra-Agent Decision Making Protocol.

And it is in fact in this last element, the Intra-Agent Decision Making Protocol, where we are now. Here, the agent will determine the truthfulness or not of the corresponding hypothesis based on an inference rule. This inference rule is specified in advance by a combination of the agent’s beliefs, or initial knowledge, with a learning factor that gathers the experience accumulated in past business cases, together with the option of new dynamic knowledge collected from a human expert in case of need (Figures 2 and 3).

Table 4. Agent’s valuation matrix over the facts.

Table 5. Agent’s valuation matrix over the purchase orders creation process.

Table 6. Agent’s hypothesis.

This protocol uses the notation of Classical Logic or Predicate Logic with its logical operators: ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (implication), ↔ (biconditional).

The arguments to be used in this protocol are: 1) Creation of Purchase Orders, 2) Monitoring of Purchase Orders and 3) Learning Factor. The first two arguments represent the agent’s static knowledge based on its beliefs or base knowledge. The third argument represents its experience or dynamic knowledge, that is, the knowledge that this agent has acquired over time through the analysis of other business cases.

The arguments representing the static knowledge used here, which are part of the antecedent of the inference rule, are the result of the valuation of their respective boolean functions in the Facts Valuation Protocol based on Agent’s Beliefs for SOX compatibility, and therefore they are variables with a true (t) or false (f) value.

The argument representing the dynamic knowledge will also have a true (t) or false (f) value depending on the result of the learning protocol. This learning protocol will take into consideration the evidence presented by the business case in this specific process.

SOX_COMPLIANT is defined as a boolean function or logical predicate that can take boolean true (t) or false (f) values and whose semantics represents compatibility with the SOX regulation. SOX_COMPLIANT (PROCESS_OF_PURCHASE_ORDERS_CREATION) composes the consequent of the main inference rule and therefore, based on its arguments, this rule allows us to obtain its truthfulness or falsehood. The conclusion is represented by the consequent of this inference rule, and its truthfulness will depend on the truthfulness of the predicates that form the antecedent of the rule.

These inference rules establish that SOX_COMPLIANT (PROCESS_OF_PURCHASE_ORDERS_CREATION) will be true if the antecedents belonging to the static knowledge are all true at the same time, or if the learning factor that represents the dynamic knowledge indicates this truthfulness. That is to say, SOX_COMPLIANT (PROCESS_OF_PURCHASE_ORDERS_CREATION) will be true (t) if all the beliefs critical for SOX compatibility (static knowledge) are true or, even if they weren’t, it will also be true (t) if its dynamic knowledge (learning factor) indicates so, based on its past experiences. This means that the Dynamic Knowledge Learning Protocol will only be used when the initial static knowledge by itself cannot determine a positive SOX compatibility.

The truthfulness or not of SOX_COMPLIANT (PROCESS_OF_PURCHASE_ORDERS_CREATION) will allow us to demonstrate or reject the hypothesis previously outlined. NON_SOX_COMPLIANT (PROCESS_OF_PURCHASE_ORDERS_CREATION) is likewise defined as a boolean function or logical predicate which can take true (t) or false (f) values and is the complementary logical predicate of SOX_COMPLIANT.
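Since Figures 2 and 3 are not reproduced here, the following sketch only illustrates the rule structure described above, with the two static beliefs of this agent as antecedents and the learning factor as the alternative disjunct:

```python
def sox_compliant(creation_ok: bool, monitoring_ok: bool, learning_factor: bool) -> bool:
    """Main rule: the process is SOX compliant when every static, SOX-relevant belief is
    satisfied, or when the dynamic knowledge (learning factor) indicates compliance."""
    return (creation_ok and monitoring_ok) or learning_factor

def non_sox_compliant(creation_ok: bool, monitoring_ok: bool, learning_factor: bool) -> bool:
    """Complementary rule: the logical negation of the main rule."""
    return not sox_compliant(creation_ok, monitoring_ok, learning_factor)
```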

3.7. Dynamic Knowledge Learning Protocol

The agent uses its static knowledge or fundamental beliefs to determine the SOX compatibility of the analyzed Purchase Orders Creation Process. If the static knowledge cannot determine a positive SOX compatibility, this Dynamic Knowledge Learning Protocol will be used. There is the possibility, based on the agent’s previous experience, of verifying whether in similar cases with similar evidence, and after consulting the human expert, it was decided to consider the process compatible with SOX. In other words, to see whether this case is an exception to the static knowledge of the agent.

Figure 2. Main rule.

Figure 3. Complementary rule.

There are specific situations that go beyond the static, initially predefined beliefs, based on specific court judgments over real cases in which a very specific context, after the analysis of the court, results in SOX compatibility even though the static initial knowledge states a non-SOX compatibility. That means we would be dealing with exceptions from real cases that the human expert knows and that belong to court resolutions or decisions of the control organisms on specific business cases, where a series of specific evidences, contrary to what is indicated by the initial knowledge, determined a positive SOX compatibility. These exceptions, through the learning protocol, will allow our agent to learn and to evolve beyond the initial knowledge formed by its beliefs.

As indicated by [39] Capobianco, Chesñevar and Simari (2004), agents should be able to adapt to dynamic and changing environments. Pinzon et al. (2011) establish the need for a self-adaptation ability as an important characteristic of Multiagent Systems. In this line, [40] Fukumoto and Sawamura (2006) proposed a model in which the results or conclusions are backpropagated to the initial knowledge to enrich future possible argumentations. With this protocol, the agent is able to change its beliefs, improving its knowledge beyond its initial state.

As time goes on, the system should learn from its previous experiences (pe) with previously analyzed business cases as well as from the consultations to an external human expert (he), representing the knowledge about recent court decisions on exceptional situations. Thus, the following learning factor (lf) relationship can be defined, representing how the knowledge of the system evolves with each new business case. Here it can be seen how the previous experience combines with the opinion of the external human expert and feeds the “future” previous experience term, allowing the system to accumulate knowledge and learn.

(1)

Given a state “t” in which the model is analyzing a specific business case, for each specific pair of evidences e1 and e2, the learning factor (lf) can be defined as a function of the previous experience (pe) at that moment and the opinion of the human expert (he), taking into consideration the combination of both evidences.

(2)

The activation factor of the previous experience (pe), at a specific instant t and for specific evidences e1 and e2, will be 1 if there is previous experience for those evidences and 0 if there is no previous experience.

(3)

The activation factor of the human expert (he), at a specific instant t and for specific evidences e1 and e2, will be 1 if there is no previous experience for those evidences and 0 if previous experience for those evidences exists. This activation factor is the complement of the previous activation factor.

β_t(e_1, e_2) = 1 − α_t(e_1, e_2)    (4)

pe_t(e_1, e_2) represents the previous experience and exists only if there is a previous learning factor for the specific evidences e_1 and e_2 at an instant before t. If that is the case, the corresponding activation factor is 1.

pe_t(e_1, e_2) = lf_{t'}(e_1, e_2),   with t' < t the most recent instant at which lf is defined for (e_1, e_2)    (5)

This factor also represents the accumulated experience.

pe_t(e_1, e_2) = lf_{t−1}(e_1, e_2) = lf_{t−2}(e_1, e_2) = … ,   once a learning factor has been defined for (e_1, e_2)    (6)

Last but not least, he_t(e_1, e_2) is the human expert indicator, which is activated by its activation factor only when no previous experience is available for the indicated evidences at previous instants of time. This human expert factor is 1 if the human expert indicates a positive SOX compatibility and 0 if a negative SOX compatibility is determined.

he_t(e_1, e_2) = 1 if the human expert determines positive SOX compatibility for (e_1, e_2);  he_t(e_1, e_2) = 0 otherwise    (7)

Developing the initial learning factor expression, we obtain the following (the evidence pair (e_1, e_2) is omitted for readability):

lf_t = α_t · pe_t + β_t · he_t    (8)

= α_t · lf_{t−1} + β_t · he_t    (9)

= α_t · (α_{t−1} · pe_{t−1} + β_{t−1} · he_{t−1}) + β_t · he_t    (10)

= α_t · α_{t−1} · lf_{t−2} + α_t · β_{t−1} · he_{t−1} + β_t · he_t    (11)

= α_t · α_{t−1} · α_{t−2} · lf_{t−3} + α_t · α_{t−1} · β_{t−2} · he_{t−2} + α_t · β_{t−1} · he_{t−1} + β_t · he_t    (12)

...

Generalizing this development, we obtain the following expression, which represents the accumulated learning experience, built either from propagated past experiences or from consultations to the human expert. The consultation to the human expert at a specific instant of time for a pair of specific evidences e_1 and e_2 is propagated to the future via the previous experience factor (pe) and allows this specific consultation to be reused in similar future cases.

lf_t(e_1, e_2) = he_{t_0}(e_1, e_2),   where t_0 ≤ t is the instant at which the human expert was first consulted for the evidence pair (e_1, e_2)    (13)

This expression represents the learning factor model proposed here; it takes the value 1 in case of positive SOX compatibility and 0 in case of negative SOX compatibility. This value comes either from accumulated past experiences or from a consultation to the human expert.

The following diagram represents this learning process, which is only used when the static knowledge or the base beliefs establish a negative SOX compatibility. The learning process consists of checking the business cases previously managed by this agent and, based on the evidences provided by the present business case, seeing whether there were cases in which the human expert indicated a positive SOX compatibility under a similar situation. Otherwise, there is no previous experience and the protocol steps on to consult the human expert with the evidences provided by this business case.

The human expert, based on knowledge of the matter and of specific court resolutions, determines whether or not there is a positive SOX compatibility. In case of a positive SOX compatibility, this compatibility resolves the present process of our business case and, at the same time, increases our agent's knowledge for similar future cases by storing this decision in the dynamic knowledge base. Figure 4 describes this protocol in more detail.

Figure 4. Dynamic knowledge learning protocol.
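A minimal sketch of how this Dynamic Knowledge Learning Protocol could be implemented is shown below (in Python; the class and function names, such as DynamicKnowledgeLearningProtocol and ask_human_expert, are illustrative assumptions and not part of the original system): previous experience is reused when it exists, and otherwise the human expert is consulted and the answer is stored so that it propagates to similar future cases.

class DynamicKnowledgeLearningProtocol:
    """Illustrative sketch of the learning factor lf(t, e1, e2).

    Previous experience (pe) is reused when available; otherwise the
    human expert (he) is consulted and the decision is stored, feeding
    the "future" previous experience term.
    """

    def __init__(self, ask_human_expert):
        # ask_human_expert(e1, e2) -> bool: True means positive SOX compatibility.
        self.ask_human_expert = ask_human_expert
        self.previous_experience = {}  # (e1, e2) -> 1 (compliant) or 0 (not compliant)

    def learning_factor(self, e1, e2):
        key = (e1, e2)
        if key in self.previous_experience:
            # Activation factor of pe is 1: reuse the accumulated experience.
            return self.previous_experience[key]
        # No previous experience: activation factor of he is 1, consult the expert.
        decision = 1 if self.ask_human_expert(e1, e2) else 0
        # Store the decision so it propagates to similar future cases.
        self.previous_experience[key] = decision
        return decision

# Hypothetical usage: the expert is consulted only the first time a pair of
# evidences is seen; afterwards the stored decision is reused.
protocol = DynamicKnowledgeLearningProtocol(ask_human_expert=lambda e1, e2: True)
print(protocol.learning_factor("evidence_1", "evidence_2"))  # consults the expert -> 1
print(protocol.learning_factor("evidence_1", "evidence_2"))  # reuses stored experience -> 1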

The agent, by itself and based on its experience over several analyzed business cases, will grow in knowledge and fine tune its final conclusions. This part of the agent's learning becomes useful during massive use of the system with a large number of business cases, where specific cases show complex situations that fall outside the static SOX regulation and where specific control organisms and courts need to take SOX compliance decisions that will be considered as precedents for similar future cases or situations.

These kinds of resolutions over exceptional situations not covered by the static SOX regulation generate a jurisprudence base which experts can consult and apply using the learning protocol described here. At the same time, the agent using this protocol is able to assimilate those resolutions and add them to its initial knowledge, growing in terms of knowledge.

Several recent studies ([41] Capera, et al., 2003; [42] Razavi, Perrot, & Guelfi, 2005; [43] Weyns, et al., 2004; [44] Zambonelli, Jennings & Wooldridge, 2003; [45] Ontañon & Plaza, 2006; [46] Parsons & Sklar, 2005) have shown the need to design Multiagent Systems able to adapt to the changes happening in their environment. With this Learning Protocol, our model follows this tendency, being able to adapt to legislation changes and to exceptional situations as well.

4. Integration with a Higher Level Multi Agent Intelligent System

In Reference [47], Kakas, Maudet and Moraitis (2004) proposed an inter-agent communication model in which agents must fulfill communication protocols defined in advance, take into consideration both the individual agent preferences and the global objectives, and be able to handle exceptional situations.

Here it is described how the previously described Argumentative SOX Compliant Decision Support Intelligent Expert System can be integrated in a higher level multiagent intelligent system to cover the full Purchasing Cycle. As already described, this Purchasing Cycle is commonly composed of nine key processes: 1) Suppliers’ Selection, 2) Suppliers’ Contracting, 3) Approval of Purchase Orders, 4) Creation of Purchase Orders, 5) Documentary Receipt of Orders, 6) Imports, 7) Check of Invoices, 8) Approval of Invoices without Purchase Order and 9) Suppliers’ Maintenance.

The previously proposed model has been designed to implement an intelligent agent that analyzes the SOX compatibility of the Purchase Orders Creation Process. Here it is described how our intelligent agent can cooperate with other agents representing the rest of the key processes of the Purchasing Cycle to compose a higher level multiagent system which can decide about the SOX compatibility of the full Purchasing Cycle.

This higher level Multiagent System has nine different agents corresponding to these nine key processes. The objective of each individual agent is to analyze the SOX compatibility of its key process. Once those individual agents have taken a decision about the SOX compatibility of their key processes, the agents should cooperate with each other, trying to reach the final objective of the Multiagent System: to decide whether or not the full Purchasing Cycle of the analyzed business case is SOX compliant. To make this possible, all agents need to establish a joint deliberative dialogue protocol in which they cooperate looking for a final decision about the SOX compatibility of the full Purchasing Cycle.
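Purely as an illustration (the identifiers below are assumptions used for this sketch, not part of the original design), the composition of such a higher level Multiagent System can be pictured as one agent per key process, each later publishing its individual SOX verdict:

# One agent per key process of the Purchasing Cycle; each agent decides
# the SOX compatibility of its own process before the joint deliberation.
KEY_PROCESSES = [
    "Suppliers' Selection",
    "Suppliers' Contracting",
    "Approval of Purchase Orders",
    "Creation of Purchase Orders",
    "Documentary Receipt of Orders",
    "Imports",
    "Check of Invoices",
    "Approval of Invoices without Purchase Order",
    "Suppliers' Maintenance",
]

# Hypothetical shared structure: each entry holds the verdict (True / False)
# that the corresponding agent will share during the joint deliberation.
individual_verdicts = {process: None for process in KEY_PROCESSES}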

In Reference [48], Rodriguez et al. reflect the fact that good coordination is needed to let individual agents cooperate together to reach the global objective on top of the individual ones. Here, in our model, this coordination is implemented via the Joint Deliberative Dialogue Protocol.

After this Joint Deliberative Dialogue Protocol, the agents together, as a whole Multiagent System, take the final decision with the conclusive Inter-Agent Decision Making Protocol. The idea behind this Multiagent System is that each agent has its individual objective and shares a common objective with the rest of the agents of the system.

4.1. Joint Deliberative Dialog Protocol. (Cooperative Joint Phase with the Rest of the Multi Agent System)

Deliberative communication among agents is a key element in multiagent technology to let the full system evolve towards a commonly agreed decision or step on its way to reach the final objective ([49] Corchado & Laza, 2003; [50] Corchado, et al., 2003).

This section is dedicated to the Joint Deliberative Dialog Protocol, in which the agent makes a proposal towards the rest of the agents that compose the Multiagent System. This proposal consists of stating that the corresponding process this agent monitors, based on the data obtained after having interrogated and analyzed the business case, is or is not compatible with the SOX regulation (Figure 5).

As answers, each of the other agents will send to this agent, during the deliberation process, an attack message contradicting its proposal, or a support message supporting it. Veenen and Prakken (2005) proposed a model in which agents are able to reject the original proposal while giving a justified reason for it.

The attack message that an agent answers to another, with the objective of contradicting its initial proposal, consists of sending the message opposite to the one proposed. That is to say, if a SOX_COMPLIANT

Figure 5. Joint deliberative dialog protocol (inquire).

(compatible with the SOX regulation) was proposed, a NON_SOX_COMPLIANT (not compatible with the SOX regulation) would be answered. If a NON_SOX_COMPLIANT is proposed, a SOX_COMPLIANT would be answered.

The support message that an agent answers to another, with the objective of supporting its initial proposal, consists of sending a message that reaffirms and supports the agent’s proposal. That is to say, if a SOX_COMPLIANT was proposed, a SOX_COMPLIANT would be answered, and if a NON_SOX_COMPLIANT was proposed, a NON_SOX_COMPLIANT would be answered (Figure 6).

At the end of this protocol, after all the agents have individually decided about the compatibility or not of their process with the SOX regulation, the system is in a stage in which all the agents know the results or individual decisions made by the rest of the agents.
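The message exchange of this protocol can be sketched as follows (a simplification in Python; the respond helper and the shared_results structure are assumptions, while the SOX_COMPLIANT and NON_SOX_COMPLIANT labels are the ones used above): each answering agent either repeats the proposal (support) or sends the opposite verdict (attack), and at the end every agent holds the individual decisions of all the others.

SOX_COMPLIANT = "SOX_COMPLIANT"
NON_SOX_COMPLIANT = "NON_SOX_COMPLIANT"

def respond(proposal, agrees):
    """Build the answer to a proposal in the deliberation round.

    A support message repeats the proposal; an attack message answers with
    the opposite verdict (SOX_COMPLIANT <-> NON_SOX_COMPLIANT). The 'agrees'
    flag stands for the answering agent's internal reasoning.
    """
    if agrees:
        return proposal
    return NON_SOX_COMPLIANT if proposal == SOX_COMPLIANT else SOX_COMPLIANT

# Example answers to a SOX_COMPLIANT proposal:
print(respond(SOX_COMPLIANT, agrees=True))   # -> SOX_COMPLIANT (support message)
print(respond(SOX_COMPLIANT, agrees=False))  # -> NON_SOX_COMPLIANT (attack message)

# At the end of the protocol every agent knows the individual decision made
# by each of the other agents, e.g. gathered in a shared structure:
shared_results = {
    "Creation of Purchase Orders": SOX_COMPLIANT,
    "Imports": SOX_COMPLIANT,
    # ... one entry per key process agent of the Purchasing Cycle
}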

There are several studies in the literature ([51] Esteva, et al., 2001; [52] Hubner, Sichman & Boissier, 2004; [53] Parunak & Odell, 2002) showing that Multiagent Systems need a higher level of organization to coordinate all the agents of the system. The Joint Deliberative Dialogue Protocol proposes a parallel alternative in which all the agents share their individual findings with the rest of the agents of the system, with the final idea that, in a further phase, all those agents together will use this shared knowledge to find a commonly agreed decision about the final compatibility of the full Purchasing Cycle.

4.2. Inter-Agent Decision Making Protocol. Process of Inter-Agent Reasoning on SOX Compatibility Based on Deductive Argumentation. Conclusive Joint Phase of the Multi Agent System

This section shows the final decision protocol in which the Multiagent System decides whether or not the analyzed business case is compatible with the SOX legislation, based on the individual decisions of each of the agents of the full system. Our Multiagent System is formed by a group of agents; each one has an individual specific objective and a global group objective shared by all the agents. Each individual objective helps its agent and the rest of the agents to achieve the global common objective.

Each individual objective is focused on analyzing the SOX compatibility at the level of its corresponding key

Figure 6. Joint deliberative dialog protocol (inquire & answer).

process analyzed by its agent. The global common objective is focused on deciding, at global system level, whether the analyzed business case is definitively compatible or not with the SOX regulation. Our Multiagent System has a final objective that is shared by all the agents that compose the system and that is reflected in the following hypothesis (Table 7):

To demonstrate the truthfulness of this statement, all the agents rely on two fundamental elements:

1) The Joint Deliberative Dialog Protocol; and 2) The Inter-Agent Decision Making Protocol. The first protocol allows all agents to share with each other the individual results over the SOX compatibility of their corresponding analyzed key processes. It means that each agent, using this protocol, communicates to the rest of the agents of the system the final result of its own objective. The second protocol allows them to use each agent’s individual conclusions (shared through the Joint Deliberative Dialog Protocol) to argue the final decision regarding the hypothesis previously outlined.

Figures 7 and 8 show the main and complementary inference rules used by this Inter-Agent Decision Making Protocol:

These inference rules combine the results of the individual objectives of the agents to reach the final conclusion about the business case compatibility with the SOX legislation.

In Reference [54] Morge and Mancarella (2007) proposed an argumentation model in which conflicts are

Table 7. Multi-agent’s hypothesis.

solved based on the arguments that justify each possible action. With this Inter-Agent Decision Making Protocol, even though each agent could have a different opinion about the SOX compatibility, a final common shared decision is taken among all the agents that compose the full system, based on the previously indicated inference rule, which fully justifies this final decision.

This protocol uses the notation of Classical Logic or Predicate Logic with the following logical operators: ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (implication), ↔ (biconditional). This deductive argumentation protocol has the objective of demonstrating the truthfulness or not of the previous hypothesis. The arguments used here, which constitute the antecedent of the inference rule, are the result of the previous deliberation process, where each agent carries out its proposal of positive or negative SOX compatibility and where the rest of the agents support or attack that proposal based on their internal reasoning.

SOX_COMPLIANT is defined as a boolean function or logical predicate that can take true (t) or false (f) values and whose semantics represent compatibility with the SOX regulation. SOX_COMPLIANT (BUSINESS_CASE) composes the consequent of the main inference rule; therefore, based on the arguments previously obtained by this agent, the rule will indicate its truthfulness only if all the elements that compose the antecedent are true. The conclusion is represented by the consequent of the previous inference rule, and its truthfulness depends on the truthfulness of the predicates that compose the antecedent of the rule.

The truthfulness or not of SOX_COMPLIANT (BUSINESS_CASE) will allow us to demonstrate or reject the hypothesis previously outlined on the business case. NON_SOX_COMPLIANT (BUSINESS_CASE) is defined as a boolean function or logical predicate that can take true (t) or false (f) values. NON_SOX_COMPLIANT is the logical complementary predicate of SOX_COMPLIANT.
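Although the exact formulation of the main rule is the one given in Figure 7, its intent as described above can be summarized, as a sketch and assuming that P_1, ..., P_9 denote the nine key processes of the Purchasing Cycle, as a conjunction over the individual results together with its complementary rule:

SOX_COMPLIANT(P_1) ∧ SOX_COMPLIANT(P_2) ∧ … ∧ SOX_COMPLIANT(P_9) → SOX_COMPLIANT(BUSINESS_CASE)

¬(SOX_COMPLIANT(P_1) ∧ SOX_COMPLIANT(P_2) ∧ … ∧ SOX_COMPLIANT(P_9)) → NON_SOX_COMPLIANT(BUSINESS_CASE)

Under this reading, a single non compliant key process is enough to make the antecedent of the main rule false and, through the complementary rule, to conclude NON_SOX_COMPLIANT (BUSINESS_CASE).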

5. Case Study

This section presents an example of applying the proposed model to a real business case. It shows the procedure that the Argumentative SOX Compliant Decision Support Intelligent Expert System described here would follow to quantify, on the one hand, the level of quality of the Purchase Orders Creation Process of a specific business case and, on the other hand, to determine whether this specific process is compatible with the SOX regulation.

These results have been collected by applying our system to a real business case. This business case was a real project carried out in a European country in 2010 and covered all the tasks needed to replace the radio network elements of one specific mobile telecommunications operator in one country with similar equipment from another manufacturer.

A dissociation procedure has been applied to abstract the business case and avoid allusions to brands or commercial products, so that we can concentrate on the business essentials needed to understand the application of our system to the analyzed business case. In more detail, the project consisted of replacing 3790 BTS radio equipments (Base Transceiver Stations) of the mobile telecommunications network of the telecom operator, distributed throughout the whole country.

Here it is explained, at a descriptive level, what a BTS equipment is and where it is located in the context of the telecommunications network of a mobile telecom operator. In the mobile telecommunications sector (Fernandez, 2006), telecom operators, making use of the license granted by the government, provide communication services based on voice and data to the end users. To provide those services, each operator has its own network or telecommunications infrastructure. These equipments are bought by telecom operators from the manufacturers of network infrastructure.

A typical mobile network of a telecom operator is formed by the interconnection of different equipments with the objective of establishing the end to end communication. The following diagram describes the general structure of a mobile telephone network with its elements (Figure 9).

One of the most typical elements visible in our environment are the antennas that we can see on top of buildings or in the middle of the countryside. Each of those antennas has the function of covering a certain geographical area. The coverage of this antenna with its radio frequency allows end users to make calls with their mobile telephones. Each antenna covers a geographical area called a “cell”. This is the reason why mobile telephony is also called cellular telephony. Each operator covers with its antennas the whole geographical

Table 8. Agent’s valuation matrix over the business case facts based on its beliefs.

Figure 9. GSM and GPRS network elements.

territory (national) in which it operates.

These antennas are connected to BTS equipments, which manage and control the antennas and the air interface. BTSs are organized in groups; those groups are controlled by other equipments called BSCs, and these BSCs are connected to other equipments called MSCs. These equipments allow the circuit switching needed to make the voice call possible. Other equipment no less important in the context of a mobile telecommunications network are the GPRS equipments, able to implement services based on data transmission by packet switching. BTSs control the antennas, manage the air interface and implement the 3G mobile technology.

As commented, the project which constitutes our business case consisted of replacing all the BTSs of one manufacturer with those of another, in short 3790 BTSs, distributed throughout the whole country.

The analyzed business case is focused on the whole process followed to contract, with companies experienced in this type of task, the services needed to replace these 3790 BTSs. Those specific services range from the removal of the existing BTS, adaptation of the location and needed civil work, installation of the new BTS and the needed configuration, to start up and system acceptance tests. In more detail, our system has analyzed the Purchase Orders Creation Process of that business case.

The reasons that can support the decision to substitute the operator’s network equipments from one manufacturer to another can be diverse, for example: 1) commercial, due to special agreements between the telecommunications operator and the manufacturer; 2) strategic, derived from strategic decisions of the management committee of the telecommunications operator; and 3) market related, for example due to mergers or acquisitions.

Twenty different companies were invited to the competition. All of them were invited to participate in the Suppliers Selection Process in order to select a group able to implement the project with quality and in a reasonable time. The competition was run over four phases of requests for quotations, in which detailed information about the project was given to the invited companies and, at the same time, discounts were requested until an acceptable level of pricing was reached. With the information gathered during these four phases, the selection process was carried out, keeping in mind, besides the economic aspects, all those aspects and details needed to make the final selection. At the end of the competition, out of the initial 20 invited companies, only 5 were selected.

6. Results

Here are shown the results obtained after applying the proposed model to the previously explained real business case. The following table summarizes the results of the first two protocols: 1) the Information Seeking Dialog Protocol and 2) the Facts Valuation Protocol based on the Agent’s Beliefs (Table 8).

According to the Facts Valuation Protocol based on the Agent’s Beliefs, among all the beliefs of the agent’s static knowledge, all of them are decisive for the SOX compatibility. These beliefs also determine the quality of the process followed in the analyzed business case.

From the quality point of view, all the key facts of the business case have obtained the maximum value, as indicated in Table 8, and according to the weight factors the final score also reaches the maximum value.
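As a sketch of how such a weighted final score can be computed (the symbols w_i and v_i are assumptions about the structure of Table 8, denoting the weight factor and the valuation of key fact i, and are not values taken from the table), a normalized weighted aggregation of the form

score = ( Σ_i w_i · v_i ) / ( Σ_i w_i )

reaches its maximum exactly when every key fact valuation v_i obtains its maximum value, which is the situation observed in this business case.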

From the SOX compliance point of view, both relevant SOX facts have obtained a true value according to the Facts Valuation Protocol based on the Agent’s Beliefs.

The valuations of these key SOX facts are the inputs to the Intra-Agent Decision Making Protocol during the conclusive individual phase of the agent (Figures 10 and 11).

According to the Intra-Agent Decision Making

Figure 10. Purchase orders creation process. Intra-agent decision making main inference rule.

Figure 11. Purchase orders creation process. Intra-agent decision making complementary inference rule.

Protocol, the first two antecedents of the main rule are true, and therefore it is not necessary to appeal to the third antecedent (LEARNING_FACTOR) to conclude that SOX_COMPLIANT (PROCESS_OF_PURCHASE_ORDERS_CREATION) is true. The previous reasoning process, based on the agent’s static knowledge, has been able to state that the followed Purchase Orders Creation Process is compatible with the SOX regulation, and it has not been necessary to use the knowledge based on the agent’s past experiences, nor to consult a human expert, to make the decision.
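The exact rule is the one shown in Figure 10; consistent with the description above, and purely as a sketch in which the antecedent names SOX_FACT_1 and SOX_FACT_2 are assumptions, the main rule can be read as allowing the conclusion either from the two SOX facts or from the learning factor:

(SOX_FACT_1 ∧ SOX_FACT_2) ∨ LEARNING_FACTOR → SOX_COMPLIANT(PROCESS_OF_PURCHASE_ORDERS_CREATION)

Under this reading, once the first two antecedents are true the consequent already follows, so the LEARNING_FACTOR branch, and therefore the Dynamic Knowledge Learning Protocol, does not need to be evaluated.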

In this case, the agent and its static knowledge have been enough to reach the conclusion. This fact is positive in the sense that the process has followed the SOX legislation rigorously (Table 9), but, on the other hand, it has not allowed the agent to learn, that is, to increase its dynamic knowledge. Finally, the present agent concludes that the process followed in the analyzed business case is SOX_COMPLIANT.

Nowadays, and in relation to the model designed here, after reviewing different international bibliographical sources and to the best of our knowledge, no publication has been found that uses Multiagent Systems and the Theory of Argumentation in the implementation of SOX internal controls with the objective of identifying whether the Purchase Orders Creation Process of a specific business case is or is not compatible with the SOX Law, supporting auditors and companies in taking their appropriate decisions about this SOX compliance. Because of this, when trying to compare our model with other existing models, although it was not possible to identify similar existing models, we have tried to select models that at least use technologies similar to the one presented here.

ARGUGRID is an existing model designed under the sixth R&D framework program of the European Union, with its main focus on the e-business area and using Multiagent Systems and Argumentation Theory as its main technologies. The following table (Table 10) shows the comparison of both models taking into consideration several relevant features.

7. Conclusions

The crash in the United States of big multinationals such as Enron or WorldCom around 2001 revealed the widespread use of fraudulent financial methods whose main objective was to show a very good financial health in order to promote the market shares and attract a great number of investors.

The legislation in effect at that moment was insufficient to avoid this type of operation. At the same time, it generated ambiguity when defining an appropriate civil and penal legal framework to judge the persons responsible for this kind of criminal behavior. As an answer to this situation, and after observing the crash of big companies with thousands of employees, the widespread fall of the financial markets, the big losses of the shareholders, the mass dismissals trying to compensate the losses on the stock exchange markets and the general lack of credibility in all the worldwide markets, the United States approved the SOX Law in July 2002.

This Law marks an inflection point in the Government’s control over the economic and financial operations of the companies in the United States, as well as of the companies that operate in its stock exchange financial markets.

The Law defines a clear civil and penal legal framework, with the objective of returning both transparency and confidence to the financial markets. The Law is in force inside the United States and also has a great international impact due to the present high degree of globalization and to the big number of countries in which US multinationals operate. The Law has become a de facto standard of financial transparency for all those companies that, by obligation (inside the United States) or in an indirect way (as the branch of a multinational in another country), begin to be compliant with it. The Law returns calm and confidence to the financial markets, being based on a clear definition of civil and penal legal responsibilities for those who carry out fraudulent financial operations with the objective of deceiving the investors. The Law states what is necessary to do, but does not say how it is necessary to do it.

The Law gives Artificial Intelligence the opportunity to be a powerful tool for companies and auditors, helping them decide whether certain operations, projects or business cases are or are not compatible with the SOX Law before it is too late, and to take the right corrective actions before the top management, the General Director and the Financial Director, personally certify that those results are real and correct and do not hide fraudulent financial operations. After the certification, there is no way back.

Regarding Multiagent Systems and their relationship with the Theory of Argumentation, it is worth remarking that the Theory of Argumentation finds in Multiagent Systems a wide range of practical applications, allowing Multiagent Systems to use a solid formal theory,

Table 9. Agent’s hypothesis.

Table 10. Comparison proposed model versus ARGUGRID.

where the formal models present in the Theory of Argumentation offer a wide range of possibilities in the design of Multiagent Systems. At the same time, the Theory of Argumentation provides Multiagent Systems with a syntactic and semantic base which helps the design of this kind of system.

A fundamental characteristic of Multiagent Systems is the communication between the elements or agents that compose them, and it is in this communication, in these dialogues and in these messages, where Multiagent Systems are closely related to the Theory of Argumentation, because it provides them with a formal structure based on preexisting argumentative models.

The relationship between the Theory of Argumentation and Multiagent Systems starts around 1995 with the publication of the article by [4] Dung (1995), where he gives the Theory of Argumentation an important scientific view very useful for Artificial Intelligence; however, this relationship consolidates around 2004, with the appearance of numerous scientific articles, congresses and international projects that nowadays have an important impact.

On the other hand, the appearance of the SOX Law in 2002 forces the information systems used in companies to implement a coherent set of internal controls to cover all the information needs of the General Director and the Financial Director so that they can personally certify the financial results published by their company. These internal controls are focused on monitoring every minimal detail of the economic transactions carried out by the purchasing, sales or finance departments of the company.

The present work combines different subjects such as Artificial Intelligence and Expert Systems, the Theory of Classical Logic or Logic of Predicates, Financial Engineering, Management and Control of Business Processes and the Theory of Argumentation, and, to the best of our knowledge, is a pioneering initiative in the application of Expert Argumentative Multiagent Systems as a support tool to make decisions over SOX compatibility.

In the same way, this work demonstrates how Multiagent Systems in combination with the Theory of Argumentation are a powerful tool that goes beyond the typical transactional report systems, and their use can be of great help to the General Director, the Financial Director, the management team, the auditors and the control organisms when deciding whether a certain transaction, operation, project or business case is or is not compatible with the SOX legislation.

Last but not least, as indicated before, the problem presented here is 1) a decision making problem, 2) that needs to be based on evidences, 3) that needs an initial, non standardized expert knowledge, and 4) that needs to be able to learn from current court resolutions. The model presented here is a novel approach to solve this kind of problem because it has a structure optimized for this specific problem, incorporates an initial expert knowledge base coming from the experience of a human expert, and incorporates a specific learning protocol to add current court resolutions to the initial knowledge base, letting the system evolve far beyond its initial knowledge state and increase its efficiency as time goes on, based on its accumulated experience.

8. Disclosure

The content of this paper reflects only the opinion of the authors with independence of their affiliations.

REFERENCES

  1. J. Fox, P. Krause and S. Ambler, “Arguments, Contradictions and Practical Reasoning,” Proceedings of the 10th European Conference on Artificial Intelligence (ECAI- 92), Vienna, 3-7 August 1992, pp. 623-627.
  2. P. Krause, S. Ambler, M. Elvang-Goransson and J. Fox, “A Logic of Argumentation for Reasoning under Uncertainty,” Computational Intelligence, Vol. 11, No. 1, 1995, pp. 113-131. doi:10.1111/j.1467-8640.1995.tb00025.x
  3. Y. Dimopoulos, B. Nebel and F. Toni, “Preferred Arguments Are Harder to Compute than Stable Extensions,” Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99), Stockholm, 31 July- 6 August 1999, pp. 36-41.
  4. P. M. Dung, “On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and N-Person Games,” Artificial Intelligence, Vol. 77, No. 2, 1995, pp. 321-357. doi:10.1016/0004-3702(94)00041-X
  5. P. Besnard and A. Hunter, “Elements of Argumentation,” The MIT Press, Cambridge, 2008.
  6. T. J. M. Bench-Capon and P. E. Dunne, “Argumentation in Artificial Intelligence,” Artificial Intelligence, Vol. 171, No. 10-15, 2007, pp. 619-641. doi:10.1016/j.artint.2007.05.001
  7. S. Kraus, K. Sycara and A. Evenchik, “Reaching Agreements through Argumentation: A Logical Model and Implementation,” Artificial Intelligence, Vol. 104, No. 1-2, 1998, pp. 1-69. doi:10.1016/S0004-3702(98)00078-2
  8. I. Rahwan and P. McBurney, “Argumentation Technology,” IEEE Intelligent Systems, Vol. 22, No. 6, 2007, pp. 21-23. doi:10.1109/MIS.2007.109
  9. I. Rahwan and G. Simari, “Argumentation in Artificial Intelligence,” Springer, New York, 2009.
  10. G. Boella, J. Hulstijn and L. Torre, “A Logic of Abstract Argumentation,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 29-41.
  11. D. N. Walton and C. W. Krabbe, “Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning,” Suny Press, Albany, 1995.
  12. E. Cogan, S. Parsons and P. McBurney, “New Types of Inter-Agent Dialogues,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 154-168.
  13. L. Amgoud and N. Hameurlain, “An Argumentation-Based Approach for Dialog Move Selection,” In: N. Maudet, S. Parsons and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2006), Vol. 4766, Springer, Berlin, 2007, pp. 128-141.
  14. Y. Tang and S. Parsons, “Argumentation-Based Multi-Agent Dialogues for Deliberation,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 229-244.
  15. L. Amgoud, N. Maudet and S. Parsons, “Modelling Dialogues using Argumentation,” Proceedings of the 4th International Conference on Multi-Agent Systems (ICMAS- 2000), Boston, 10-12 July 2000, pp. 31-38.
  16. C. Reed, “Dialogue Frames in Agent Communication,” Proceedings of the 3rd International Conference on Multi-Agent Systems (ICMAS-98), Paris, 3-7 July 1998, pp. 246-253.
  17. S. Parsons, M. Wooldridge and L. Amgoud, “On the Outcomes of Formal Inter-Agent Dialogues,” ACM Press, New York, 2003.
  18. E. Sklar and S. Parsons, “Towards the Application of Argumentation-Based Dialogues for Education,” Proceedings of the 3rd International Conference on Autonomous Agents and Multi-Agent Systems, New York, 23 July 2004, pp. 1420-1421.
  19. A. Belesiotis, M. Rovatsos and I. Rahwan, “A Generative Dialogue System for Arguing about Plans in Situation Calculus,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 23-41.
  20. J. Devereux and C. Reed, “Strategic Argumentation in Rigorous Persuasion Dialogue,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 94-113.
  21. P.-A. Matt, F. Toni and J. Vaccari, “Dominant Decisions by Argumentation Agents,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 42-59.
  22. M. Wardeh, T. Bench-Capon and F. Coenen, “Multi-Party Argument from Experience,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 216-235.
  23. M. Morge and P. Mancarella, “Assumption-Based Argumentation for the Minimal Concession Strategy,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 114-133.
  24. M. Thimm, “Realizing Argumentation in Multi-Agent Systems Using Defeasible Logic Programming,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 175-194.
  25. C. Changchit, C. Holsapple and D. Madden, “Positive Impacts of an Intelligent System on Internal Control Problem Recognition,” Proceedings of the 32nd Hawaii International Conference on System Sciences, Maui, 5-8 January 1999, p. 10.
  26. R. Meservy, “Auditing Internal Controls: A Computational Model of the Review Process (Expert Systems, Cognitive, Knowledge Acquisition, Validation, Simulation),” PhD Thesis, University of Minnesota, Minneapolis, 1985.
  27. S. O’Callaghan, “An Artificial Intelligence Application of Backpropagation Neural Networks to Simulate Accountants’ Assessments of Internal Control Systems Using COSO Guidelines,” PhD Thesis, University of Cincinnati, Cincinnati, 1994.
  28. F. Liu, R. Tang and Y. Song, “Information Fusion Oriented Fuzzy Comprehensive Evaluation Model on Enterprises’ Internal Control Environment,” Proceedings of the 2009 Asia-Pacific Conference on Information Processing, Shenzhen, 18-19 July 2009, pp. 32-34. doi:10.1109/APCIP.2009.16
  29. A. Kumar and R. Liu, “A Rule-Based Framework Using Role Patterns for Business Process Compliance,” In: N. Bassiliades, G. Governatori and A. Paschke, Eds., Proceedings of the International Symposium on Rule Representation, Interchange and Reasoning on the Web, Vol. 5321, Orlando, 30-31 October 2008, pp. 58-72. doi:10.1007/978-3-540-88808-6_9
  30. C. Changchit and C. W. Holsapple, “The Development of an Expert System for Managerial Evaluation of Internal Controls,” Intelligent Systems in Accounting, Finance and Management, Vol. 12, No. 2, 2004, pp. 103-120. doi:10.1002/isaf.246
  31. A. Korvin, M. Shipley and K. Omer, “Assessing Risks Due to Threats to Internal Control in a Computer-Based Accounting Information System: A Pragmatic Approach Based on Fuzzy Set Theory,” Intelligent Systems in Accounting, Finance and Management, Vol. 12, No. 2, 2004, pp. 139-152. doi:10.1002/isaf.249
  32. A. Deshmukh and L. Talluru, “A Rule-Based Fuzzy Reasoning System for Assessing the Risk of Management Fraud,” Intelligent Systems in Accounting, Finance & Management, Vol. 7, No. 4, 1998, pp. 223-241. doi:10.1002/(SICI)1099-1174(199812)7:4<223::AID-ISAF158>3.0.CO;2-I
  33. K. M. Fanning and K. O. Cogger, “Neural Network Detection of Management Fraud Using Published Financial Data,” International Journal of Intelligent Systems in Accounting, Finance & Management, Vol. 7, No. 1, 1998, pp. 21-41. doi:10.1002/(SICI)1099-1174(199803)7:1<21::AID-ISAF138>3.0.CO;2-K
  34. J. Coakley, L. Gammill and C. Brown, “Artificial Neural Networks in Accounting and Finance,” Oregon State University, Corvallis, 1995.
  35. K. M. Fanning and K. O. Cogger, “A Comparative Analysis of Artificial Neural Networks Using Financial Distress Prediction,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 3, 1994, pp. 241-252.
  36. O. J. Welch, T. E. Reeves and S. T. Welch, “Using a Genetic Algorithm-Based Classifier System for Modeling Auditor Decision Behaviour in a Fraud Setting,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 7, No. 3, 1998, pp. 173-186. doi:10.1002/(SICI)1099-1174(199809)7:3<173::AID-ISAF147>3.0.CO;2-5
  37. R. P. Srivastava, S. K. Dutta and R. W. Johns, “An Expert System Approach to Audit Planning and Evaluation in the Belief-Function Framework,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 5, No. 3, 1996, pp. 165-183.
  38. S. Sarkar, R. S. Sriram and S. Joykutty, “Belief Networks for Expert System Development in Auditing,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 5, No. 3, 1998, pp. 147-163. doi:10.1002/(SICI)1099-1174(199609)5:3<147::AID-ISAF108>3.0.CO;2-F
  39. M. Capobianco, C. Chesñevar and G. R. Simari, “An Argument-Based Framework to Model an Agent’s Beliefs in a Dynamic Environment,” In: I. Rahwan, P. Moraitis and C. Reed, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2004), Vol. 3366, Springer, Berlin, 2005, pp. 95-110.
  40. T. Fukumoto and H. Sawamura, “Argumentation-Based Learning,” In: N. Maudet, S. Parsons and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2006), Vol. 4766, Springer, Berlin, 2007, pp. 17-35.
  41. D. Capera, J. P. Georgé, M. P. Gleizes and P. Glize, “Emergence of Organisations, Emergence of Functions,” AISB03 Convention, 2003.
  42. R. Razavi, J. Perrot and N. Guelfi, “Adaptive Modeling: An Approach and a Method for Implementing Adaptive Agents,” Massively Multi-Agent Systems, Vol. 3446, 2005, pp. 136-148.
  43. D. Weyns, K. Schelfthout, T. Holvoet and O. Glorieux, “Role Based Model for Adaptive Agents,” BASYS04 Convention, 2004.
  44. F. Zambonelli, N. R. Jennings and M. Wooldridge, “Developing Multiagent Systems: The Gaia Methodology,” ACM Transactions on Software Engineering and Methodology, Vol. 12, No. 3, 2003, pp. 317-370.
  45. S. Ontañon and E. Plaza, “Arguments and Counterexamples in Case-Based Joint Deliberation,” In: N. Maudet, S. Parsons and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2006), Vol. 4766, Springer, Berlin, 2007, pp. 36-53.
  46. S. Parsons and E. Sklar, “How Agents Alter Their Beliefs after an Argumentation-Based Dialogue,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 297-312.
  47. A. Kakas, N. Maudet and P. Moraitis, “Layered Strategies and Protocols for Argumentation-Based Agent Interaction,” In: I. Rahwan, P. Moraïtis and C. Reed, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2004), Vol. 3366, Springer, Berlin, 2005, pp. 64-77.
  48. S. Rodriguez, Y. de Paz, J. Bajo and J. M. Corchado, “Social-Based Planning Model for Multiagent Systems,” Expert Systems with Applications, Vol. 38, No. 10, 2011, pp. 13005-13023. doi:10.1016/j.eswa.2011.04.101
  49. J. M. Corchado and R. Laza, “Constructing Deliberative Agents with Case-Based Reasoning Technology,” International Journal of Intelligent Systems, Vol. 18, No. 12, 2003, pp. 1227-1241. doi:10.1002/int.10138
  50. J. M. Corchado, R. Laza, L. Borrajo, J. C. Yanes and M. Valiño, “Increasing the Autonomy of Deliberative Agents with a Case-Based Reasoning System,” International Journal of Computational Intelligence and Applications, Vol. 3, No. 1, 2003, p. 101. doi:10.1142/S1469026803000823
  51. M. Esteva, J.-A. Rodríguez-Aguilar, C. Sierra, P. Garcia and J. L. Arcos, “On the Formal Specifications of Electronic Institutions,” In: F. Dignum and C. Sierra, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2001), Vol. 1991, Springer, Berlin, 2001, pp. 126-147. doi:10.1007/3-540-44682-6_8
  52. J. F. Hübner, J. S. Sichman and O. Boissier, “Using the MOISE+ for a Cooperative Framework of MAS Reorganisation,” In: A. L. C. Bazzan and S. Labidi, Eds., Advances in Artificial Intelligence-SBIA 2004, Vol. 3171, Springer, Berlin, 2004, pp. 506-515.
  53. H. Van D. Parunak and J. J. Odell, “Representing Social Structures in UML,” In: M. J. Wooldridge, G. Weiß and P. Ciancarini, Eds., Agent-Oriented Software Engineering II, Vol. 2222, Springer, Berlin, 2002, pp. 1-16.
  54. M. Morge and P. Mancarella, “The Hedgehog and the Fox. An Argumentation-Based Decision Support System,” Proceedings of the 4th International Workshop on Argumentation in Multi-Agent Systems (ArgMAS 2007), Springer, Berlin, 2008, pp. 55-68.