The notion of context provides flexibility and adaptation to cloud computing services. Location, time, identity and activity of users are examples of primary context types. The motivation of this paper is to formalize reasoning about context information in cloud computing environments. To formalize such context-aware reasoning, the logic LCM of context-mixture is introduced based on a Gentzen-type sequent calculus for an extended resource-sensitive logic. LCM has a specific inference rule called the context-mixture rule, which can naturally represent a mechanism for merging formulas with context information. Moreover, LCM has a specific modal operator called the sequence modal operator, which can suitably represent context information. The cut-elimination and embedding theorems for LCM are proved, and a fragment of LCM is shown to be decidable. These theoretical results are intended to provide a logical justification of context-aware cloud computing service models such as a flowable service model.

The motivation of this paper is to formalize reasoning about context information in cloud computing environments. To formalize such context-aware reasoning, the logic LCM of context-mixture is introduced as a Gentzen-type sequent calculus based on linear logic [1,2], which is known to be a useful resource-sensitive logic. LCM has a specific inference rule called the context-mixture rule and a specific modal operator called the sequence modal operator [3,4]. The cut-elimination and embedding theorems for LCM are proved as the main results of this paper. A fragment of LCM is also shown to be decidable. These theoretical results are intended to provide a concrete logical justification of context-aware cloud computing service models such as a flowable service model [5,6].

The definitions of cloud computing, which include the terms on-demand, pay-by-use, virtualized and dynamically scalable, reflect the characteristics of cloud computing environments [7,8]. Cloud-related issues have been discussed and studied based on the notion of context, which includes the location, time, identity and activity of users. The use of context is known to be very important in cloud and ubiquitous computing.

There is a widely accepted definition of context [

Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between the user and application, including the user and application themselves.

Location, time, identity and activity are primary context types for characterizing the situation of a particular entity. Contexts can be classified into three categories [

Context provides flexibility and adaptation to services. A flowable service, which is a new notion of context-aware cloud computing services, is a logical stream that organizes and provides circumjacent services in such a way that individuals perceive them as naturally embedded in their surrounding environments [5,6,10-14]. A flow of service is a metaphor for a subconsciously controlled navigation that guides the user through the fulfillment of a flowable service process that fits the user’s context and situation and runs smoothly, with unbroken continuity, in an unobtrusive and supportive way. Flowable services can be useful for context-aware cloud computing applications such as Cloud Campus, the e-learning environment of Cyber University in Japan.

The original intention of the flowable service model is to apply resources in open cloud environments [5,6,10-14]. The model uses increasingly rich context information to adjust the service flow and make it more usable, and it shares resources or services fairly and to the utmost extent. To formalize reasoning about the context-aware flowable service model in open cloud computing environments, we need an appropriate logic that can represent the following three items:

1) Context-mixture rule;

2) Resource-sensitive reasoning;

3) Context information.

In this paper, the logic LCM of context-mixture, which can represent the above three items (context-mixture rule, resource-sensitive reasoning and context information), is introduced as a Gentzen-type sequent calculus based on linear logic. LCM has a specific inference rule called the context-mixture rule, which can naturally represent a mechanism for merging formulas with context information.

Merging formulas with context information, which represents an interaction between different pieces of context information, is required for a suitable representation of context-aware flowable services, since handling various kinds of context information is an important issue for flowable services. We call such a merging mechanism context-mixture.

The notion of context-mixture is also important for representing deployment models in cloud computing environments [

The context-mixture rule of LCM is of the form:

where two multisets of formulas with context information are mixed by this rule.
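The proof figure for (mixture) is not reproduced in this version of the text. For orientation only, a hedged sketch is given below: the original, context-free ancestor of such a rule, the mingle-style scheme studied in relevant logics, has the following sequent form; the actual LCM rule additionally carries context information (sequence-modalized formulas) and a shared side multiset, and this reconstruction is an assumption, not the exact rule.

```latex
% Hedged sketch: mingle-style scheme underlying (mixture).
% The exact LCM rule also carries context annotations and a side multiset.
\[
\frac{\Gamma \Rightarrow \gamma \qquad \Delta \Rightarrow \gamma}
     {\Gamma,\ \Delta \Rightarrow \gamma}\ (\textit{mingle})
\]
```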

The rule (mixture) was introduced in [

This original rule and the corresponding Hilbert-style axiom scheme have been studied in formalizing “relevant” human reasoning [16,17], grammatical reasoning [

As presented in [

The logic LCM of context-mixture is obtained from linear logic [1,2] by adding the context-mixture rule (mixture) and a sequence modal operator, which represents a sequence of symbols. Using the sequence modal operator of LCM, we can appropriately express “context information” within “resource-sensitive reasoning”.

The notion of “resources”, encompassing concepts such as processor time, memory, cost of components and energy requirements, is fundamental to computational systems. This notion is also very important for efficient resource management in cloud computing environments [

Linear logic can elegantly represent the notion of “resources” [

This example means “if we spend two coins, then we can have a cup of coffee and as much water as we like,” when the price of coffee is two coins and water is free. Note that this example cannot be expressed using classical logic, since the formula coin ∧ coin (two coins) in classical logic is logically equivalent to coin (one coin), i.e., classical logic has no resource-awareness.
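The displayed formula for this example is missing above; the description matches the well-known linear-logic “coffee” illustration, which (reconstructed with the connectives named in the language definition of this paper: fusion, implication and the exponential) reads:

```latex
% Reconstruction of the standard coffee example:
% two coins buy a coffee together with unlimited water.
\[
\mathit{coin} * \mathit{coin} \rightarrow \mathit{coffee} * {!}\mathit{water}
\]
```

Here $*$ is fusion (resource conjunction), $\rightarrow$ is resource-sensitive implication, and $!\mathit{water}$ expresses that water is available in unlimited supply.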

In order to discuss certain real and practical examples, the resource descriptions in linear logic should be more fine-grained and expressive and capable of conveying context information. For example, the following expressions may be necessary for some practical situations:

These examples respectively mean “in a teashop, if John spends three coins, then he can have a cup of coffee after two minutes and a cup of water after one minute,” and “in a cafeteria, if John spends two coins, then he can have a cup of coffee after one minute.” In these examples, the expressions for the shop, the person and the waiting times, which are regarded as “context information”, can naturally be represented by the sequence modal operator in LCM.
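One plausible rendering of these two expressions with the sequence modal operator is sketched below; the concrete atoms and the placement of the operators are assumptions for illustration, not the paper's exact formulas.

```latex
% Hedged sketch: context information expressed by sequence modal operators.
\[
[\mathit{teashop}][\mathit{John}]\,(\mathit{coin}*\mathit{coin}*\mathit{coin}
  \rightarrow [\mathit{2min}]\mathit{coffee} * [\mathit{1min}]\mathit{water})
\]
\[
[\mathit{cafeteria}][\mathit{John}]\,(\mathit{coin}*\mathit{coin}
  \rightarrow [\mathit{1min}]\mathit{coffee})
\]
```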

As presented in [

which respectively mean:

“if a client sends an incorrect user ID and a correct password to login to a server at a given login attempt, then the server returns an error message to the client.”

“if a server returns error messages to a client more than twice, then the server returns the password-reject message to the client.”

Note that the error messages are expressed as a “resource” by using the resource-sensitive connectives of LCM, and the “information” on servers, clients and login attempts is expressed by the sequence modal operator.
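As a purely operational illustration of this protocol description (the function and state names are assumptions matching the prose, not LCM syntax), the resource-counting behaviour of the error messages can be sketched as follows.

```python
# Illustrative sketch of the login-attempt example: error messages are
# counted like resources, and more than two of them trigger a password
# reject. Names and the threshold follow the prose; not LCM syntax.

def login_attempt(server_state, client, user_id_ok, password_ok):
    """Process one login attempt and return the server's response."""
    if not user_id_ok and password_ok:
        # incorrect user ID with a correct password -> error message
        server_state[client] = server_state.get(client, 0) + 1
        if server_state[client] > 2:
            # more than two error messages -> password reject
            return "password-reject"
        return "error"
    return "ok"

state = {}
responses = [login_attempt(state, "c1", False, True) for _ in range(3)]
print(responses)  # ['error', 'error', 'password-reject']
```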

The reason underlying the use of the notion of “sequences” in the sequence modal operator is explained below. The notion of “sequences” is fundamental to practical reasoning in computer science because it can appropriately represent “data sequences”, “program-execution sequences”, “action sequences”, “time sequences”, etc. The notion of sequences is thus useful for representing the notions of “information”, “attributes”, “trees”, “orders”, “preferences” and “ontologies”. Representing “context information” by sequences is especially suitable because a sequence structure forms a monoid with the following informational interpretation [

1) M is a set of pieces of (ordered or prioritized) information (i.e., a set of sequences);

2) ; is a binary operator (on M) that combines two pieces of information (i.e., a concatenation operator on sequences);

3) ∅ is the empty piece of information (i.e., the empty sequence).

Based upon the informational interpretation, a formula of the form [b]α intuitively means that “α is true based on the sequence b of (ordered or prioritized) information pieces.” Further, a formula of the form [∅]α, which coincides with α, intuitively means that “α is true without any information (i.e., it is an eternal truth in the sense of classical logic).”
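The monoid reading of sequences can be sketched directly; the tuple encoding below is a minimal illustration of the three clauses above, not part of the formal system.

```python
# Sketch of the informational monoid underlying the sequence modal
# operator: pieces of information are sequences (tuples here),
# ";" is concatenation, and the empty sequence is the unit.

def compose(b, c):
    """The composition b;c of two information sequences."""
    return b + c

EMPTY = ()  # the empty piece of information

# Monoid laws: associativity of ";" and identity of the empty sequence.
b, c, d = ("shop",), ("John",), ("2min",)
assert compose(compose(b, c), d) == compose(b, compose(c, d))
assert compose(EMPTY, b) == b == compose(b, EMPTY)
print(compose(b, c))  # ('shop', 'John')
```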

Before the precise discussion, the language of the proposed logic is introduced below. Formulas are constructed from propositional variables, 1 (multiplicative truth constant), ⊤ (additive truth constant), ⊥ (additive falsity constant), → (implication), ∧ (conjunction), * (fusion), ∨ (disjunction), ! (exponential), and [b] (sequence modal operator), where b is a sequence. Sequences are constructed from atomic sequences, ∅ (empty sequence) and ; (composition). Lower-case letters b, c, d, ... are used for sequences, lower-case letters p, q, ... are used for propositional variables, Greek lower-case letters α, β, γ, ... are used for formulas, and Greek capital letters Γ, Δ, Σ, ... are used for finite (possibly empty) multisets of formulas. For a prefix ♯ (an exponential or a sequence modal operator), an expression ♯Γ is used to denote the multiset {♯γ | γ ∈ Γ}. The symbol ≡ is used to denote the equality of sequences (or multisets) of symbols. An expression [∅]α means α, and the expressions b;∅ and ∅;b mean b. A sequent is an expression of the form Γ ⇒ γ whose succedent is nonempty. The terminological conventions regarding sequents (e.g., antecedent and succedent) are assumed to be the usual ones. If a sequent S is provable in a sequent calculus L, this fact is denoted as L ⊢ S or ⊢ S. Parentheses for an associative connective are omitted, since the two bracketings of a repeated associative connective coincide for any formulas α, β and γ. A rule R of inference is said to be admissible in a sequent calculus L if the following condition is satisfied: for any instance S₁, ..., Sₙ / S of R, if ⊢ Sᵢ for all i, then ⊢ S.

Definition 2.1. Formulas and sequences are defined by the following grammar, assuming p and e represent propositional variables and atomic sequences, respectively:

The set of sequences (including the empty sequence) is denoted as SE. In what follows, an abbreviated expression is used for a (possibly empty) finite nesting [b₁]⋯[bₙ] of sequence modal operators (n ≥ 0); in particular, such a nesting can be empty.
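As an informal illustration of Definition 2.1 (the constructor names are assumptions; the connective list follows the prose description of the language), the two-sorted grammar of sequences and formulas can be transcribed as a small AST:

```python
# AST sketch of the grammar in Definition 2.1: sequences are built from
# atomic sequences, the empty sequence and composition ";"; formulas from
# variables, constants, binary connectives, "!" and the operator [b].

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class AtomSeq:
    name: str                  # atomic sequence e

@dataclass(frozen=True)
class EmptySeq:
    pass                       # the empty sequence

@dataclass(frozen=True)
class CompSeq:
    left: "Sequence"           # composition b;c
    right: "Sequence"

Sequence = Union[AtomSeq, EmptySeq, CompSeq]

@dataclass(frozen=True)
class Var:
    name: str                  # propositional variable p

@dataclass(frozen=True)
class Const:
    name: str                  # multiplicative truth, additive truth/falsity

@dataclass(frozen=True)
class Bin:
    op: str                    # implication, conjunction, fusion, disjunction
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Bang:
    body: "Formula"            # !alpha

@dataclass(frozen=True)
class SeqMod:
    seq: Sequence              # [b]alpha
    body: "Formula"

Formula = Union[Var, Const, Bin, Bang, SeqMod]

# Example: the formula [b](p -> q)
phi = SeqMod(AtomSeq("b"), Bin("->", Var("p"), Var("q")))
```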

The logic LCM of context-mixture is then introduced below.

Definition 2.2. The initial sequents of LCM are of the form: for any propositional variable p,

The cut rule of LCM is of the form:

The context-mixture rule of LCM is of the form:

The sequence rules of LCM are of the form:

The logical inference rules of LCM are of the form:

It is remarked that Girard’s intuitionistic linear logic ILL is a subsystem of LCM: it is obtained from LCM by deleting (mixture) and the sequence modal operators.

Sequents of the form α ⇒ α are provable in cut-free LCM for any formula α. This fact is shown by induction on α.

The (possibly empty) side multiset occurring in (mixture) is needed to show the cut-elimination theorem for an extended linear logic with (mixture) [

Proposition 2.3. The following rules are admissible in cut-free LCM.

Proof. Straightforward. Here, we give the proof only for the rule (regu), by induction on the proofs P of the upper sequent in cut-free LCM. We distinguish the cases according to the last inference of P, and show only the following cases.

Case (initial sequent): P is an initial sequent. In this case, the required sequent is also an initial sequent.

Case (left rule): The last inference of P is a left logical inference rule. By the induction hypothesis, we obtain the claims for the upper sequents of this inference, and we then obtain the required fact by applying the same rule.

■

An expression of the form α ⇔ β means the two sequents α ⇒ β and β ⇒ α.

Proposition 2.4. The following sequents are provable in cut-free LCM, for any formulas and any sequences:

1) where;

2) where;

3);

4).

Definition 3.1. LM is obtained from LCM by deleting {(;left), (;right)} and all the sequence-modal expressions appearing in the initial sequents and the logical inference rules. The names of the logical inference rules of LM are distinguished by a superscript label.

The logic LM is equivalent to a logic introduced in [

Definition 3.2. We fix a countable set of propositional variables and, for the sequences in SE, corresponding pairwise disjoint sets of fresh propositional variables indexed by those sequences. The language (or the set of formulas) of LCM is obtained from the original set of propositional variables, and the language (or the set of formulas) of LM is obtained from the indexed propositional variables.

A mapping f from the set of formulas of LCM to the set of formulas of LM is defined by the following clauses:

1) for any,;

2) where;

3) where ;

4);

5).

Let Γ be a set of formulas in the language of LCM. Then, an expression f(Γ) means the result of replacing every occurrence of a formula α in Γ by an occurrence of f(α).
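The clauses of the mapping are not fully legible in this version. As a hedged sketch only, the usual shape of such an embedding can be illustrated as follows: each modalized atom is replaced by a fresh variable indexed by its accumulated sequence prefix, and the mapping commutes with all other connectives. The tuple encoding and the indexing scheme are assumptions, not the paper's exact definition.

```python
# Hedged illustration of an embedding in the style of Definition 3.2:
# formulas are nested tuples; ("seq", b, phi) encodes [b]phi. The
# translation pushes the accumulated sequence prefix down to atoms,
# where it becomes part of a fresh variable name. This is a plausible
# reconstruction, not the paper's exact mapping.

def f(phi, prefix=()):
    tag = phi[0]
    if tag == "var":
        # a modalized atom [b1]...[bn]p becomes a fresh indexed variable
        name = phi[1] if not prefix else phi[1] + "_" + ";".join(prefix)
        return ("var", name)
    if tag == "seq":
        # accumulate the sequence prefix and recurse
        return f(phi[2], prefix + (phi[1],))
    if tag == "bin":
        # f commutes with binary connectives, distributing the prefix
        return ("bin", phi[1], f(phi[2], prefix), f(phi[3], prefix))
    if tag == "bang":
        return ("bang", f(phi[1], prefix))
    raise ValueError(f"unknown constructor: {tag}")

# Example: [b](p -> [c]q) is mapped to p_b -> q_{b;c}
phi = ("seq", "b", ("bin", "->", ("var", "p"), ("seq", "c", ("var", "q"))))
print(f(phi))  # ('bin', '->', ('var', 'p_b'), ('var', 'q_b;c'))
```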

Theorem 3.3. (Embedding) Let Γ be a multiset of formulas in the language of LCM, γ be a formula in that language, and f be the mapping defined in Definition 3.2. Then, the sequent Γ ⇒ γ is provable in LCM if and only if its f-translation is provable in LM.

Proof. (only-if part): By induction on the proofs P of the given sequent in LCM. We distinguish the cases according to the last inference of P, and show some cases.

Case (initial sequent): P is an initial sequent. Since the translation of an initial sequent is again an initial sequent by the definition of f, we obtain the required fact.

Case (mixture): The last inference of P is of the form:

By the induction hypothesis, the f-translations of the upper sequents are provable in LM. Then, we obtain the required fact by applying the rule (mixture) of LM.

Case (right1): The last inference of P is of the form:

By the induction hypothesis, the f-translation of the upper sequent is provable in LM. Then, we obtain the required fact by applying the corresponding rule of LM, whose conclusion coincides with the required translated sequent by the definition of f.

Case (; left): The last inference of P is of the form:

By the induction hypothesis, the f-translation of the upper sequent is provable in LM. This already gives the required fact, since the translation of the conclusion coincides with the translation of the upper sequent by the definition of f.

(if part): By induction on the proofs Q of the translated sequent in LM. We distinguish the cases according to the last inference of Q, and show some cases.

Case (logical inference rule): Suppose that the last inference of Q is a logical inference rule whose conclusion is, by the definition of f, the translation of the corresponding LCM sequent. By the induction hypothesis, the corresponding original upper sequents are provable in LCM, and we then obtain the required fact by applying the corresponding rule of LCM.

Case (cut): The last inference of Q is of the form:

Since the cut formula is a formula of LM, it is the f-translation of some LCM formula. Then, by the induction hypothesis, the original upper sequents are provable in LCM. We then obtain the required fact by using (cut) in LCM. ■

Theorem 3.4. (Cut-elimination) The rule (cut) is admissible in cut-free LCM.

Proof. We have the following modified statements of Theorem 3.3:

1) if a sequent is provable in LCM, then its f-translation is provable in LM;

2) if the f-translation of a sequent is provable in cut-free LM, then the sequent is provable in cut-free LCM.

To show the second statement, we do not need to prove the case for (cut) as in Theorem 3.3.

We now prove the cut-elimination theorem for LCM as follows. Suppose that a sequent is provable in LCM. Then its f-translation is provable in LM by the modified statement 1) of Theorem 3.3, and hence provable in cut-free LM by the cut-elimination theorem for LM. By the modified statement 2) of Theorem 3.3, the original sequent is provable in cut-free LCM. ■

Corollary 3.5. (Consistency) LCM is consistent, i.e., the empty sequent is not provable in cut-free LCM.

In the following, we show that the !-free fragment of LCM is decidable. Before showing this, we mention that full LCM is undecidable: ILL is known to be undecidable, the proof being carried out by encoding Minsky machines, and LCM can encode Minsky machines in the same way, since LCM is an extension of ILL.

Definition 3.6. The !-free fragment of LCM is obtained from LCM by deleting {(!left), (!right), (co), (we)}.

Definition 3.7. The !-free fragment of LM is obtained from the !-free fragment of LCM by deleting {(;left), (;right)} and all the sequence-modal expressions appearing in the initial sequents and the logical inference rules.

Theorem 3.8. (Decidability) The !-free fragment of LCM is decidable.

Proof. The provability of the !-free fragment of LCM can be transformed into that of the !-free fragment of LM by the restriction of Theorem 3.3 to these fragments. Since the !-free fragment of LM is decidable, the !-free fragment of LCM is also decidable. ■

In this paper, the logic LCM of context-mixture, which can suitably express context information in cloud computing environments, was introduced. The cut-elimination and embedding theorems for LCM were proved, and the !-free fragment of LCM was shown to be decidable. LCM is based on an extended resource-sensitive (intuitionistic linear) logic with both the context-mixture rule (mixture) and the sequence modal operator [b]. The rule (mixture) of LCM can suitably represent a mechanism for merging formulas with context information, and the operator [b] of LCM can represent context information. A concrete logical foundation for reasoning about context information in cloud computing environments was thus obtained in this paper. Some technical remarks on LCM and some related works on context-aware modeling are addressed in the rest of this paper.

It is remarked that the sequence modal operator in LCM can be adapted to a wide range of non-classical logics. An extended intuitionistic linear logic with the sequence modal operator but without the context-mixture rule was shown to be useful for describing secure password authentication protocols [

The present paper was intended to provide a logical justification of context-aware cloud computing service models (such as a flowable service model) in cloud computing environments. We now give a survey of such context-aware model approaches. Context is used to address various issues in cloud and ubiquitous environments. Many context models have been proposed and developed: a key-value model, a markup model, an object-oriented model, and an ontology-based model (see [