Cognition is the ability to process information, apply knowledge, and change circumstances. Cognition is associated with intent and its accomplishment through various processes that monitor and control a system and its environment. It is associated with a sense of “self” (the observer) and the systems with which it interacts (the environment or the “observed”). Cognition makes extensive use of time and history in executing and regulating the tasks that constitute a cognitive process. Whether cognition is computation in the strict sense of adhering to the Church-Turing thesis, or needs additional constructs, is a very relevant question for the design of self-managing (autonomous) distributed computing systems. In this paper we argue that cognition requires more than the mere book-keeping provided by Turing machines, and that certain aspects of cognition, such as self-identity, self-description, self-monitoring and self-management, can be implemented using parallel extensions to current serial von Neumann stored program control (SPC) implementations of the Turing machine. We argue that the new DIME (Distributed Intelligent Computing Element) computing model, recently introduced as the building block of the DIME network architecture, is an analogue of Turing’s O-machine and extends it to implement a recursive managed distributed computing network, which can be viewed as an interconnected group of such specialized Oracle machines, referred to as a DIME network. The DIME network architecture provides the architectural resiliency often associated with cellular organisms, through auto-failover, auto-scaling, live migration, and end-to-end transaction security assurance in a distributed system.
We argue that the self-identity and self-management processes of a DIME network inject elements of cognition into Turing machine based computing, as demonstrated by two prototypes that eliminate the complexity introduced by hypervisors, virtual machines and other layers of ad hoc management software in today’s distributed computing environments.
“It is a fundamental problem of science, and whether we study Gödel or Penrose, Lucas or Hofstadter, Searle or Dennett, everyone agrees that the basic question is whether human-minds are super-mechanical, though there is widespread disagreement about the answer.”1
Cockshott et al. [ ] examine, in their book, the limits of what can be computed. This paper begins where their book ends, by proposing a way to push computation beyond its current limits, circumventing Gödel’s prohibition on self-reflection in computing systems. The limitations of the computers that he helped design were very much on John von Neumann’s mind; he spent a great deal of time thinking about designing reliable computers using unreliable components [
Autonomic computing, by definition, implies two components in the system: 1) the observer (or the “self”) and 2) the observed (or the environment), with which the observer interacts by monitoring and controlling the aspects that are of importance. It also implies that the observer is aware of systemic goals, in terms of best practices, so that it can measure and control its interaction with the observed. Autonomic computing systems attempt to model system-wide actors and their interactions in order to monitor and control various domain-specific goals, again in terms of best practices. Cellular organisms, however, take a more selfish view, defining their models in terms of how they interact with their environment. The autonomic behavior in living organisms is attributed to the “self” and “consciousness”, which contribute to defining one’s multiple tasks to reach specific goals within a dynamic environment and to adapting behavior accordingly.
The autonomy in cellular organisms comes from three sources:
1) Genetic knowledge, transmitted by the survivor to its successor in the form of executable workflows and control structures that describe stable patterns for optimally deploying the available resources to assure the organism’s safekeeping in interacting with its environment;
2) The ability to dynamically monitor and control the organism’s own behavior, along with its interaction with its environment, using the genetic descriptions; and
3) The development of a history, through memorizing transactions and identifying new associations through analysis.
In short, the genetic computing model allows the formulation of descriptions of workflow components that contain not only the content of how to accomplish a task but also the context, constraints, control and communication needed to assure systemic coordination toward the overall purpose of the system. That machine learning needs to mimic the learning behavior of at least a child, in order to go beyond the mere book-keeping possible within Turing machine limitations, was not lost on Turing, as he points out explicitly [4,5]:
“In the process of trying to imitate an adult human mind we are bound to think a good deal about the process which has brought it to the state that it is in. We may notice three components:
1) The initial state of the mind, say at birth;
2) The education to which it has been subjected;
3) Other experience, not to be described as education, to which it has been subjected.
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets (Mechanism and writing are from our point of view almost synonymous). Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.”
However, the child’s mind already comes with a genetic description of both execution and regulation models supporting the genetic transactions [
While Alan Turing and John von Neumann both looked at computing-model analogies with neural networks, and discussed hierarchical schemes to circumvent the consequences of Gödel’s theorems on the limitations of Turing machines, they could not have foreseen the current hardware breakthroughs that provide parallel computation threads in many-core processors with a hierarchy of high-bandwidth connections between the computing elements. In this paper, we describe an extension of the von Neumann stored-program serial implementation of the Turing machine network, using the same abstractions of self-management and regulation that provide the elegant execution of life’s workflows with appropriate context, constraints, control and communication processes. It exploits the performance, parallelism and high-bandwidth networks available in the new generation of processors to inject real-time cognition into Turing computing machines. In Section 2, we briefly review current arguments about cognition and computing and come down on the side that cognition is more than computing. We identify the basic abstractions that are instrumental in providing the self-management features that capture the behavior of the observer and the observed with optimal resource utilization in a dynamic, non-deterministic environment. In Section 3, we argue that the recently introduced DIME network architecture injects these self-management features into a Turing machine and allows building autonomic distributed systems in which the computer and the computed interact with each other, pushing the boundaries of Turing machines. We argue that the DIME is analogous to the O-machine introduced by Turing in his thesis [8,9], and that the DIME network architecture provides a model for a distributed recursive computing engine that allows replication, repair, recombination and reconfiguration of computing elements to implement dynamic self-managing distributed systems.
In Sections 4 and 5, we discuss the impact of the DIME network architecture (DNA) on distributed systems design, with visibility and control of both the observer (the computation that is managing resources) and the observed (the computed). In Section 6, we conclude with some observations on injecting cognition into computing.
An autonomous system is typically considered to be a self-determining system, as distinguished from a system whose behavior is explicitly externally engineered and controlled. The concept of autonomy (and autonomous systems) is, therefore, crucial to understanding cognitive systems. According to Maturana [10,11] a cognitive system is a system whose organization defines a domain of interactions in which it can act with relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain. If a living system enters into a cognitive interaction, its internal state is changed in a manner relevant to its maintenance, and it enters into a new interaction without loss of its identity. A cognitive system becomes an observer through recursively generating representations of its interactions, and by interacting with several representations simultaneously it generates relations with the representations of which it can then interact and repeat this process recursively, thus remaining in a domain of interactions always larger than that of the representations. In addition, it becomes self-conscious through self-observation; by making descriptions of itself (representations), and by interacting with the help of its descriptions it can describe itself describing itself, in an endless recursive process.
According to Evan Thompson [
These observations lead us to conclude that self-management is an outcome of cognitive abilities of a system with the following defining attributes of cognitive systems:
1) A self-identity that does not change when a state change occurs with interaction;
2) A domain of interactions;
3) A cognitive interaction process that allows an observer to recursively generate representations of its interactions. By interacting with several representations simultaneously, the observer generates relations with the representations, with which it can then interact, repeating this process recursively and thus remaining in a domain of interactions always larger than that of the representations; and
4) Co-emergence.
In the next section we discuss the Turing O-machine and argue that it is more suitable for simulating cognitive activity, and that such a simulation transcends the mere book-keeping capabilities of a Turing machine.
Extending the three mutually exclusive positions discerned by Johnson-Laird [ ], Copeland identifies five possibilities:
1) The human brain (or, variously, mind or mindbrain) is a computer, equivalent to some Turing machine;
2) The activity of a human brain can be simulated perfectly by a Turing machine but the brain is not itself a computing machine;
3) The brain’s cognitive activity cannot in its entirety be simulated by a computing machine: a complete account of cognition will need “to rely on non-computable procedures”;
4) The brain is what Turing called an O-machine; and
5) The cognitive activity of the brain can be simulated perfectly by an O-machine, but the brain is not itself an O-machine; such a simulation cannot be effected by a Turing machine.
In this paper we argue that the DIME network architecture introduced to inject architectural resiliency in distributed computing systems [14,15] supports the fifth alternative introduced by Copeland.
The Turing machine is an abstract model that uses an instruction cycle {read -> compute (change state) -> write} to replace a human computing a real number (using paper and pencil) with a machine capable of only a finite number of conditions. In modern terms, a program provides a description of the Turing machine, and its stored program control implementation in hardware allows its execution. A universal Turing machine is itself a Turing machine, but with the ability to simulate a sequence of synchronous Turing machines, each executing its own description. This allows a sequence of programs to model and execute a description of the physical world, as Cockshott et al. [
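The {read -> compute (change state) -> write} instruction cycle can be sketched in a few lines of code. The machine below is purely illustrative (a bit-flipper, not any machine discussed in this paper); the function name, tape encoding and rule table are all assumptions made for the sketch.

```python
# A minimal sketch of the Turing "read -> compute (change state) -> write"
# instruction cycle. The machine, alphabet and rules are illustrative:
# this one flips bits until it reads a blank, then halts.

def run_turing_machine(tape, rules, state="start", head=0, blank="_"):
    """Execute read -> compute -> write -> move until a halting state."""
    tape = dict(enumerate(tape))
    while state != "halt":
        symbol = tape.get(head, blank)                 # read
        state, write, move = rules[(state, symbol)]    # compute (change state)
        tape[head] = write                             # write
        head += 1 if move == "R" else -1               # move the head
    return "".join(tape[i] for i in sorted(tape))

# Rule table: flip every bit, halt on blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Note that everything the machine does is fixed in advance by the rule table; nothing outside the machine can influence the computation once it starts, which is the limitation the DIME model addresses below.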
Turing himself discussed the mathematical objection to his view that machines could think [16,17]. In reply to the objection, he proposed designing computers that could learn or discover new instructions, overcoming the limitations imposed by Gödel’s results in the same way that human mathematicians presumably do. He also pointed out [
In this paper we argue that the DIME network architecture recently introduced [
The DIME network architecture concerns itself with process work-flows that contain the descriptions to execute and regulate the tasks described to accomplish an intent. When the process is initiated by an external agent at t = 0, the whole and the parts act as an integrated system to accomplish the intent with the given descriptions of both the task executions and their regulation.
In its simplest form, a DIME comprises a policy manager (addressing the fault, configuration, accounting, performance and security aspects, often denoted FCAPS); a computing element called the MICE (Managed Intelligent Computing Element); and two communication channels. The FCAPS elements of the DIME provide setup, monitoring, analysis and reconfiguration based on workload variations, system priorities derived from policies, and latency constraints. They are interconnected and controlled using a signaling channel, which overlays a computing channel that provides I/O connections to the MICE [
In this model, the controlled computing element (the MICE) acts as a conventional Turing machine and the FCAPS managers act as the Oracles.
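The division of labor between the MICE (the conventional Turing machine) and the FCAPS managers (the Oracles) can be sketched as follows. The class names and the two example policy checks are assumptions made for illustration, not part of the DIME specification.

```python
# Illustrative sketch: a DIME couples a managed computing element (MICE)
# with FCAPS policy managers that act as Oracles over its execution.

class MICE:
    """The conventional Turing-machine part: it just computes."""
    def compute(self, task, data):
        return task(data)

class DIME:
    def __init__(self, policies):
        # One check per managed dimension (fault, configuration,
        # accounting, performance, security); two shown here.
        self.policies = policies
        self.mice = MICE()

    def execute(self, task, data):
        # Every manager must approve before the MICE is allowed to run.
        for name, check in self.policies.items():
            if not check(data):
                return ("blocked", name)
        return ("ok", self.mice.compute(task, data))

dime = DIME({
    "security": lambda d: isinstance(d, list),   # hypothetical policy
    "performance": lambda d: len(d) < 1000,      # hypothetical policy
})
```

The MICE itself knows nothing about the policies; all monitoring and control sits in the surrounding managers, mirroring the separation the text describes.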
There are three key modifications to the Turing machine that provide the abstractions required for the cognitive system attributes identified in this paper:
1) The “read -> compute -> write” instruction cycle of the Turing machine is modified to “interact with external agent -> read -> compute -> interact with external agent -> write”, which allows the external agent to influence the further evolution of the computation.
2) The external agent consists of a set of parallel managers monitoring and controlling the evolution of the computation based on the context, constraints and available resources. The context, constraints and control options are specified as a meta-model of the algorithm under computation. The context refers to local resource utilization and the computational state of progress obtained through interaction with the Turing machine. Each DIME contains two parts: the service regulator (SR), which specifies the algorithm’s context, constraints, communication abstractions and control commands used to monitor and control the algorithm’s execution at run-time; and the algorithm executable module (SP), which can be loaded and run in the DIME.
3) In addition to the read/write communication of the Turing machine, the managers communicate with external agents using a parallel signaling channel. This allows the external agents to influence the computation in progress based on the context and constraints, just as an Oracle is expected to do. The external agent itself could be another DIME, in which case changes in one computing element can influence the evolution of another at run time without halting the Turing machine executing its algorithm.
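The modified instruction cycle can be sketched as a loop in which an external agent is consulted before each read and each write. The agent interface and the example policy (a step budget standing in for a resource constraint) are assumptions for illustration only.

```python
# Sketch of the modified cycle: "interact with external agent -> read ->
# compute -> interact with external agent -> write". The agent can steer
# or stop the computation without the machine halting itself.

def managed_cycle(tape, rules, agent, state="start", head=0):
    """Run a computation whose evolution an external agent can influence."""
    steps = 0
    while state != "halt":
        # interact: the agent may override the state, e.g., to enforce
        # a resource constraint; None means "no intervention"
        state = agent(steps, state) or state
        if state == "halt":
            break
        symbol = tape[head]                          # read
        state, write, move = rules[(state, symbol)]  # compute
        state = agent(steps, state) or state         # interact again
        tape[head] = write                           # write
        head += move
        steps += 1
    return tape

# Bit-flipping rules; the "Oracle" grants a budget of three steps.
rules = {("start", "0"): ("start", "1", 1),
         ("start", "1"): ("start", "0", 1)}
budget_agent = lambda steps, state: "halt" if steps >= 3 else None
```

Left to itself, this machine would run off the end of the tape; the agent stops it after three steps, illustrating how control can come from outside the computation rather than from the rule table.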
The separation of computing and its management at each DIME is extended and scaled to become a two-layer DIME network, which thus provides a regulatory (or signaling) network overlay over the computing network. The DIME network [ ] consists of:
1) Nodes that encapsulate the managed intelligent computing element, MICE, with self-management of fault, configuration, accounting, performance and security (FCAPS);
2) Message-based communications (loose coupling);
3) Channels for intra- and inter-DIME communication and control: parallel and isolated channels for signaling (FCAPS management) and data (information) exchange;
4) Support for distributed recursive processes that, at some level, contain services that execute a set of tasks.
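The value of keeping the signaling channel parallel to and isolated from the data channel can be illustrated with a toy auto-failover sketch. The node and regulator classes, and the heartbeat policy, are assumptions made for this illustration.

```python
# Toy sketch of a two-channel DIME network: nodes serve data on one
# channel while a regulator uses a parallel signaling channel to detect
# a fault and reconfigure (auto-failover) without touching the data path.

class Node:
    def __init__(self, name):
        self.name, self.alive = name, True
    def heartbeat(self):            # signaling channel (FCAPS management)
        return self.alive
    def process(self, item):        # data channel (information exchange)
        return (self.name, item)

class Regulator:
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby
    def dispatch(self, item):
        # The signaling channel decides routing; the data path itself
        # is unchanged, so the computation is unaware of the failover.
        node = self.primary if self.primary.heartbeat() else self.standby
        return node.process(item)

a, b = Node("a"), Node("b")
reg = Regulator(a, b)
```

Because fault detection rides on its own channel, a saturated or failed data path cannot block the management decisions, which is the point of the overlay.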
A generic structure model for the DIME network using the π-calculus recursive operation [24,25] is as follows.
Let C, D, R and M represent the sets of communication channels, DIME nodes, Regulator nodes and MICE nodes respectively. A DIME node di is a set of concurrent processes,
di = (ri | {ci} | {mi}),
where ri is a Regulator and {ci} and {mi} represent a set of channels and MICE; the two sets of communication channels of a DIME carry the signaling (FCAPS management) and computing (I/O) traffic respectively. The DIME network DN is then
DN = !(r0 | [{c}] | {d}),
where “!” is the π-calculus recursion operator, “|” represents concurrency, r0 represents the initial/root Regulator (at start-up), [···] represents option, and {···} represents a set.
Thus, from the above we see that a DIME network consists of an initial (start-up) root Regulator r0 that may be connected through a set of communication channels and operates concurrently with a DIME network. We can visualize the DIME network from some node di, created in the ith iteration, as
DN = ({da} | di | {dd}),
where {da} and {dd} represent the ancestors and the descendants.
A DIME can abstract a network of DIMEs, thus providing an FCAPS-managed DIME composition scheme. This allows us to implement both hierarchical and temporal event flows constituting business processes. The DIME’s self-identity, its self-management, the recursive network composition scheme for implementing managed networks of computing elements, and the dynamic control offered by the signaling channel for configuring and reconfiguring DIME networks together provide a powerful mechanism for implementing the process flows required to support cognitive processes in computing systems.
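The recursive composition scheme, in which a single DIME can stand for an entire sub-network, can be sketched as a nested structure. The class name, node labels and traversal method below are assumptions made for illustration.

```python
# Sketch of recursive DIME composition: any node may abstract an entire
# FCAPS-managed sub-network of DIMEs, so networks nest to any depth.

class DimeNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)   # the sub-network this node abstracts

    def flatten(self):
        """Enumerate every node reachable through the recursive structure."""
        yield self.name
        for child in self.children:
            yield from child.flatten()

# A root regulator abstracting two DIMEs, one of which abstracts a third.
root = DimeNode("r0", [
    DimeNode("d1", [DimeNode("d3")]),
    DimeNode("d2"),
])
```

An ancestor sees only its immediate children, yet the whole network remains reachable by recursion, which is what lets one DIME's managers configure and reconfigure an arbitrarily deep sub-network over the signaling channel.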