Journal of Modern Physics
Vol.09 No.03 (2018), Article ID: 82273, 14 pages
10.4236/jmp.2018.93024

Renormalization of Hierarchy and Semantic Computing

Maria K. Koleva

Institute of Catalysis, Bulgarian Academy of Sciences, Sofia, Bulgaria

Copyright © 2018 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 3, 2018; Accepted: February 2, 2018; Published: February 5, 2018

ABSTRACT

In the setting of boundedness, renormalization acquires a new meaning and implementation: it appears as a generic operational protocol, available to intelligent complex systems, aimed at an essential non-extensive reduction of computation costs and a non-extensive speeding up of computing. Another advantage of the proposed renormalization is that, along with the reduction of computation costs and the speeding up of computing, it allows further hierarchical super-structuring at which the same properties of speeding up the computing and reducing the computation costs hold. The fundamental novelty of this renormalization is provided by a highly non-trivial interplay between structural and functional properties.

Keywords:

Renormalization, Boundedness, Hierarchy, Decomposition Theorem, Semantic Computing, Markov Process

1. Introduction

Nowadays the scientific community faces the major question of whether the so-called Moore's Law is only a technical problem of immense importance or whether it has fundamental aspects as well. Moore's Law asserts that the storage capacity achieved by each and every existing technology eventually reaches a plateau. So far, the scientific community sees the best strategy for overcoming this problem in reaching a consensus about which technology is the best.

I assert that the problems posed by Moore's Law have a fundamental aspect as well because, whatever the best technology turns out to be, the execution of the corresponding computing would still proceed by extensive means only. Indeed, according to traditional information theory, the extensivity of computing stems from the reducibility of each and every algorithm to a number of arithmetic operations executed by means of linear processes only. Thus, the more complex the algorithm, the more operations it involves and, in turn, the more hardware elements (and/or time) are necessary to accomplish the task; thereby the future development of computing performance reduces to a compromise between the speed of operations, energy, and production costs.

That is why I see the major step forward not as the best choice among the variety of contemplated future and emerging technologies but as the establishment of grounding principles for a next-generation performance strategy that opens the door to the realization of a functional circuit capable of autonomous creation and comprehension of information. Recently, I put forward a new kind of information theory [1] in which I introduced the so-called semantic computing. Its exclusive property is the non-extensivity of informational organization and of its processing. The major goal of the present paper is to demonstrate that the non-extensive hierarchy of semantic computing provides a way to a substantial reduction of computation costs and a considerable speeding up of computation. This fundamental reduction of computation costs and speeding up of computing is implemented by means of a renormalization procedure specially built for the purpose.

Now I briefly demonstrate that the traditional algorithmic theory cannot support the application of traditional renormalization to the process of computing. Indeed, the traditional algorithmic theory assigns the logical operations a governing role over the physical properties of the hardware. The major instrument for achieving this goal is the hand-crafted resetting of the hardware to a “neutral” state, so that the probability of appearance of each information state (symbol) remains the same during the execution of each logical operation. It is worth noting that the reset to “neutrality” is a necessary condition for executing all numerical operations as linear Markovian processes. The expectation then is that, being a Markov process, algorithmic computing is amenable to traditional renormalization, viewed as an iterative procedure of coarse-graining followed by decimation which eventually results in a steady state. In turn, one would expect that a multiple application of the operation of coarse-graining would result in a substantial reduction of computation costs and computation time. However, this procedure is compromised by the execution of the command “IF”, which renders the current memory radius dependent on the current input. Consequently, there is no single well-defined memory radius along a sequence that represents any given algorithm; and the lack of a single-sized memory radius corrupts the procedures of coarse-graining and decimation and renders them ill-defined. Thus, the process of renormalization turns ambiguous because it does not converge to a single well-defined steady state in which the effect of the execution of the command “IF” would be separable from the renormalization. As a result, no algorithmic process can be renormalized, and this road to the reduction of computation costs ends in an impasse.

The major goal of the present paper is to demonstrate that semantic computing is subject to a renormalization procedure exclusive to boundedness which is not only non-ambiguous but also results in a substantial reduction of computation costs accompanied by an essential speeding up of computing. Another distinctive property of that renormalization is that the above properties hold for each and every environment, provided only that the latter is bounded. This is in sharp contrast with the traditional renormalization procedure, which holds only when the control parameters are fine-tuned to certain specific value(s).

The organization of the paper is as follows: the basic principles of semantic computing are considered in the next section; the basic building blocks of the new type of renormalization are presented in Section 3.

2. Semantic Computing

An exclusive property of semantic intelligence and of its implement, semantic computing, is that they operate in a non-specified, ever-changing environment, provided only that the latter is bounded. By comparison, the traditional algorithmic theory and traditional computing operate in an artificially designed environment.

Semantic intelligence and its implement, semantic computing, arise naturally in the setting of the concept of boundedness, where they appear as the response of an intelligent complex system to an ever-changing environment.

The concept of boundedness is a new explanatory paradigm aimed at explaining the behavior of complex systems. A systematic study can be found in [1]. In the setting of this paradigm, semantic computing is implemented by a hierarchical self-organization of physical processes specific to each intelligent complex system.

The goal of this section is to highlight the major properties of semantic intelligence and semantic computing that naturally arise under the concept of boundedness and to demonstrate their relation to the general characteristics of the physical processes which implement them. This defines the strategic goal: establishing the grounding principles of a next-generation approach for building a circuit able to exhibit semantic intelligence.

Since semantic computing arises naturally in the frame of the concept of boundedness, which is still a very new concept, the most general assumptions of the concept of boundedness are presented in the next sub-section. The necessary conditions for setting up semantic computing are presented in the sub-section after that. This order is chosen because the intelligent complex systems are a sub-class of the family of complex systems; as such, they share all the properties of the wider class of complex systems along with properties specific to semantic intelligence.

The description of the basic principles of the concept of boundedness follows in its major part the corresponding presentation in [2] .

2.1. Concept of Boundedness

Before presenting the basic principles of the concept of boundedness, let me recall the notion of a complex system and the major characteristics of its behavior. Let me start with the notion of a complex system: it concerns how the parts of a system are organized so that the system behaves as a single object and how it interacts with its environment. Thus the notion of a complex system encompasses an enormous variety of systems, ranging from physical ones such as quasar pulsations, to biological ones such as DNA sequences, to social ones such as financial time series.

The intensive empirical examination of the last decades reveals the remarkable enigma of complex-system behavior: properties highly specific to each complex system persistently coexist with certain universal ones shared by all of them. Thus, on the one hand, they all share the same characteristics, such as power-law distributions and sensitivity to environmental variations; on the other hand, each system has its unique “face”, i.e. one can distinguish between an earthquake and the heartbeat of a mammal. What makes the study of this coexistence so important is the enormous diversity of systems where it has been established. In order to get an idea of this vast ubiquity, here is a brief list of such phenomena: earthquakes, traffic noise, heartbeat of mammals, public opinion, currency exchange rates, electrical currents, chemical reactions, weather, ant colonies, DNA sequences, telecommunications, etc.

But the greatest mystery enshrined in the behavior of complex systems is that both intelligent and non-intelligent systems belong to the same class: thus a Beethoven symphony, a product of a genius mind, and traffic noise, which, though also a product of human activity, is un-intelligent in its behavior, share the same type of power spectrum. Another example is the semantics of human languages: in 1935 the linguist G. K. Zipf established that, given some corpus of natural language, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word occurs approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc. The Zipf Law thus ignores any semantic meaning and so seems to sweep out the difference between mind activity and random sequences of letters. Thus we come to the following fundamental problem: what makes a complex system “intelligent”, and why should it share properties so “indifferent” to intelligence?
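
For illustration only, the rank-frequency form of the Zipf Law can be checked numerically. The sketch below is my own toy construction on synthetic data rather than a linguistic corpus: words drawn with probabilities proportional to 1/rank reproduce counts that fall off as 1/rank, i.e. the second most frequent word occurs about half as often as the first, and so on.

```python
import numpy as np
from collections import Counter

# Toy check of the rank-frequency form of the Zipf Law on synthetic data
# (illustrative only; not a linguistic corpus).
rng = np.random.default_rng(7)
vocab = [f"w{r}" for r in range(1, 201)]        # 200 "words", ranked by construction
p = 1.0 / np.arange(1, 201)                     # probability proportional to 1/rank
p /= p.sum()

sample = rng.choice(vocab, size=200_000, p=p)   # a synthetic "corpus"
counts = Counter(sample)
for rank in (1, 2, 3, 10):
    key = f"w{rank}"
    print(f"rank {rank}: observed {counts[key]}, ~ (top frequency)/{rank} = {counts['w1'] // rank}")
```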

The affiliation of apparently intelligent systems such as human languages and music to the same class as earthquakes and a variety of other natural phenomena suggests in a straightforward way that intelligent behavior is embedded in natural processes. Thus the opposition between intelligence as a highly specific activity and the fact that it inherently belongs to a universal class of natural phenomena raises the major question of whether one can expect to define a criterion able to distinguish the specificity of one system from that of another while still affiliating each of them to the same class of complex systems.

Next the grounding assumptions of the concept of boundedness are introduced. They are:

・ A complex system remains stable if and only if the rate and amplitude of the variations that it exerts in response to an ever-changing environment are bounded within margins specific to the system.

・ The response is local and not pre-determined. This implies that it depends on the current state of a system and the current environment impact. Yet, in order to sustain boundedness of rates it behaves in a non-linear and non-homogeneous way.

・ Complex systems are self-organized in a hierarchy of responses so that different hierarchical levels are linked through inter-level feedbacks. An exclusive property of the inter-level feedbacks is that each of them operates as a bounded irregular environment for the self-organization at each and every hierarchical level. In turn, this provides the bi-directionality of the hierarchy, as it goes both bottom up and top down. This is in sharp contrast with the traditional approach, where different objects are considered closed systems and the hierarchy goes only bottom up.

A crucially important fact for the entire theory is that bounded time series acquire a much broader understanding in the setting of boundedness, because their major properties are subject to a different theorem than the major statistical theorems such as the Law of Large Numbers and the Central Limit Theorem. To make this statement clear, let me recall that, in order to meet boundedness, the local rules change in a non-homogeneous way from one trial to another. As a consequence, different events cannot be considered representable by independent random variables. However, the requirement that the succession of events at different trials be represented by a succession of independent random variables is a necessary constraint for the Law of Large Numbers and the Central Limit Theorem to hold. The fundamental advantage of the major theorem of boundedness, called hereafter the decomposition theorem, is that it is free from the constraint of independence of the variables; it holds for arbitrary variables provided only that the latter are bounded. Thus, the decomposition theorem appears as a counterpart to the Central Limit Theorem. Indeed, they are defined for different subjects: while the decomposition theorem is about bounded yet dependent variables, the subject of the Central Limit Theorem is independent random yet unbounded variables.

The decomposition theorem, central to the entire theory of boundedness, is grounded in the general statistical theorem of Lindeberg [3]. The Lindeberg theorem states that every bounded sequence has finite mean and finite variance irrespective of its distribution. On the grounds of the Lindeberg theorem, I have proposed the decomposition theorem [1], which proves that there exists a presentation basis in which the response of each and every complex system decomposes into two parts, a specific and a universal one, each of which has characteristics that are robust to the details of the variations in any time series, provided only that the latter is bounded. The rigorous assertion is that the power spectrum of each time series that represents the behavior of a complex system is decomposable into two parts: a specific discrete band (called the homeostatic pattern) and a continuous band whose shape is universal. Further, the claim is that the decomposition is additive and that it happens with constant-in-time accuracy. It is worth noting that these results hold irrespective of the characteristics of any time series, provided only that the latter is bounded. In addition to these two parts of the power spectrum, a non-recursive component persists whose origin lies in the highly non-trivial interplay between the specific and the universal part.
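
The content of the decomposition can be pictured numerically. The sketch below is my own toy construction rather than the theorem or its proof: a bounded time series built from a periodic “homeostatic” part plus a bounded irregular part yields a power spectrum in which sharp discrete lines sit on top of a continuous band.

```python
import numpy as np

# Toy illustration: discrete "homeostatic" lines on top of a continuous band.
rng = np.random.default_rng(1)
N, dt = 4096, 1.0
t = np.arange(N) * dt
f1, f2 = 200 / (N * dt), 450 / (N * dt)          # place the lines on exact FFT bins

homeostatic = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)
noise = np.clip(np.cumsum(rng.uniform(-0.1, 0.1, N)), -1.0, 1.0)   # bounded irregular part

x = homeostatic + noise
spectrum = np.abs(np.fft.rfft(x - x.mean()))**2 / N
freqs = np.fft.rfftfreq(N, dt)

def line_to_background(f0):
    """Ratio of the spectral line at f0 to the median of the surrounding continuous band."""
    i = np.argmin(np.abs(freqs - f0))
    background = np.median(spectrum[i - 20: i + 20])
    return spectrum[i] / background

# both ratios come out much larger than 1: a discrete band on top of the continuous one
print(round(line_to_background(f1)), round(line_to_background(f2)))
```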

The greatest value of that decomposition is:

1) It allows an unambiguous separation of an object from its environment. The notion of an object is embodied by the specific pattern, called homeostasis, because its characteristics remain intact in an ever-changing environment. Moreover, one can define them regardless of the details of that environment.

2) Though the exact local behavior remains unpredictable, the decomposition provides predictability of the behavior of each complex system up to the predictability of its current homeostatic pattern.

3) The persistent presence of a non-recursive component constitutes the non-recursive way of relating structural and functional properties. Indeed, while the structural properties are defined through the specific pattern, whose power spectrum is given by the discrete band, the functional properties are expressed through the entire power spectrum, i.e. through the interplay among the specific, noise and non-recursive components altogether.

2.2. Semantic Intelligence as a Subclass of Complex Systems Behavior

Next, the necessary conditions for the realization of any piece of semantic intelligence and of its implement, semantic computing, are considered. Again the presentation follows the corresponding presentation in [2]. It should be stressed that the class of intelligent complex systems is a sub-class of the complex systems. Thus, all complex systems are subject to boundedness, but the intelligent complex systems are subject to an additional constraint which is presented next.

An exclusive property of any complex system considered in the frame of boundedness is that its state space is partitioned into domains, each of which is characterized by a homeostatic pattern specific to it; the rate of intra-domain motion is bounded and irregular. A property crucial for the idea of semantic intelligence is that the motion among domains is restricted to adjacent domains only. Note that this property is exclusive to boundedness and is an immediate consequence of the boundedness of rates, which does not allow arbitrarily large jumps. By comparison, traditional statistical mechanics puts forward the Markovianity of the jumps between states as the primary property, thus allowing jumps of arbitrary size.
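
The contrast can be made concrete with a small schematic example of my own, intended only as an illustration: under boundedness of rates the one-step transition structure over the domains is banded, i.e. only adjacent domains (and staying put) are reachable, whereas the generic Markov picture allows a dense matrix of jumps of arbitrary size.

```python
import numpy as np

# Schematic contrast: adjacent-only transitions (banded matrix) vs. arbitrary jumps
# (dense matrix). Four domains are used for definiteness.
n = 4
rng = np.random.default_rng(6)

dense = rng.random((n, n))                      # generic Markov picture: any jump allowed
dense /= dense.sum(axis=1, keepdims=True)

banded = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) <= 1:                     # adjacent domains only (and staying put)
            banded[i, j] = rng.random()
banded /= banded.sum(axis=1, keepdims=True)

print("allowed jumps under boundedness:\n", (banded > 0).astype(int))
print("allowed jumps in the generic Markov picture:\n", (dense > 0).astype(int))
```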

The general necessary constraint which provides the implementation of semantic intelligence by physical processes consists of the requirement that the state space of any intelligent complex system be partitioned into at least 4 domains. The reason behind the particular choice of no fewer than 4 domains can be found in [1].

The utilization of this constraint is two-fold:

・ The first assumption is to associate information symbols with the intra-domain homeostatic patterns. The far-reaching consequence of that association is that it provides the algorithmic uniqueness of each and every specific law. The algorithmic uniqueness is achieved through the ubiquitous presence of a non-recursive component in addition to the homeostatic and noise characteristics. To recall, “non-recursive” implies that no specific law can be reached by a finite number of steps organized in a traditional algorithm. The non-recursive component comes from a highly non-trivial interplay between the homeostatic pattern and the inter-level feedbacks of every intelligent complex system. In turn, it makes the different specific laws unique and algorithmically unreachable from one another. It should be stressed that the comparison with the traditional explanatory paradigm displays a sharp contrast. Indeed, the traditional explanatory approach assumes the existence of a universal law such that all specific laws are its algorithmic derivatives.

It is worth noting that the association of the notion of an information symbol with the notion of a homeostatic pattern renders the executed intelligence stable and robust to environmental changes, a property exclusively provided by the decomposition theorem, according to which each homeostatic pattern is robust to environmental fluctuations provided only that the latter are bounded. By comparison, the execution of any piece of algorithmic intelligence requires a specific steady environment which must be artificially provided.

・ The second assumption consists of associating the meaning of each semantic unit with the performance of a specific non-mechanical engine built on the corresponding inter-domain orbit in the state space. As an example of non-mechanical engines one may consider the bio-chemical cycles in living organisms. The exclusive property of presenting the meaning of a semantic unit through the performance of an engine is that the latter provides sensitivity to permutations of the semantics. Indeed, just as in the semantics of human languages, where the meaning of each word depends on the order of its letters, an engine performs differently if it operates in different directions. Thus, for example, the famous Carnot cycle operates in one direction as a heat engine and, run in the opposite direction, performs as a refrigerator (heat pump).

Thus, each semantic response consists of a hierarchy of cycles which pass through adjacent states only, irrespective of the intensity of the current environmental impact. In turn, this provides the most fundamental property of semantic intelligence: the autonomous creation and comprehension of information.

Outlining, there are two ways of presenting a semantic unit: the first is through the sequence of information symbols of which it consists, while the second is through the performance of a specific engine built on the cycle consisting of the above sequence of information symbols. This two-fold presentation of every semantic unit is the major implement for the non-extensivity and bi-directionality of the hierarchy of every semantic response. It should be stressed that the two presentations are algorithmically unreachable from one another.
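
The two-fold presentation can be pictured with a toy data structure; the sketch below is purely schematic and mine, not the paper's construction. The same semantic unit is held as a sequence of information symbols (domain labels), while its “performance” is the oriented cycle of inter-domain transitions, which changes when the direction of traversal is reversed.

```python
from collections import namedtuple

# Toy illustration of a semantic unit: a cycle of homeostatic-pattern labels whose
# "performance" is sensitive to the direction of traversal.
SemanticUnit = namedtuple("SemanticUnit", ["cycle"])

def meaning(unit):
    """Orientation-sensitive 'performance': the ordered tuple of inter-domain transitions."""
    c = unit.cycle
    return tuple(zip(c, c[1:] + c[:1]))

forward = SemanticUnit(cycle=("A", "B", "C", "D"))
reverse = SemanticUnit(cycle=tuple(reversed(forward.cycle)))
print(meaning(forward) == meaning(reverse))   # False: reversing the cycle changes the meaning
```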

A fundamental advantage of semantic computing is that the hierarchical organization of semantic cycles provides computing that is stable and robust to environmental changes. Another advantage of this setting is that it simultaneously opens the door to an essential speeding up of computing and an essential reduction of the computation costs. However, the question is whether this happens in an unambiguous way. The task of the next section is to demonstrate that semantic computing is subject to special conditions imposed on the renormalization which guarantee that the reduction of the computation costs and the speeding up of the computing happen in an unambiguous way.

3. Renormalization under Boundedness

Nowadays the notion of renormalization acquires a wider meaning: it is associated with any iterative procedure of coarse-graining followed by decimation at each and every iteration. However, the problem of non-ambiguous convergence to a single state is left open. One of the most important cases is that of a Markovian process. It has been proven that a Markovian process converges to a steady state called equilibrium, but the question of whether this happens in an unambiguous way is still open. Some solved cases are phase transitions and mass renormalization in quantum field theory, problems solved by K. Wilson [4]. In the general case, however, there is no established route to unambiguous convergence to a single state.
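
The traditional picture referred to here can be illustrated with a minimal numerical sketch of my own (it is not the procedure developed below): a Markov chain is coarse-grained iteratively by lumping adjacent pairs of states, with its equilibrium (stationary) distribution used as the weighting; at every step the lumped chain remains a Markov chain and inherits the aggregated equilibrium of the original one.

```python
import numpy as np

def lump_pairs(P, pi):
    """Coarse-grain the transition matrix P by merging adjacent pairs of states,
    weighting the merged rows by the stationary weights pi."""
    n = P.shape[0] // 2
    Q = np.zeros((n, n))
    w = np.zeros(n)
    for I in range(n):
        a, b = 2 * I, 2 * I + 1
        w[I] = pi[a] + pi[b]
        for J in range(n):
            c, d = 2 * J, 2 * J + 1
            Q[I, J] = (pi[a] * (P[a, c] + P[a, d]) + pi[b] * (P[b, c] + P[b, d])) / w[I]
    return Q, w / w.sum()

rng = np.random.default_rng(0)
P = rng.random((8, 8))
P /= P.sum(axis=1, keepdims=True)            # an 8-state Markov chain

vals, vecs = np.linalg.eig(P.T)              # stationary distribution (equilibrium)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

while P.shape[0] > 1:
    P, pi = lump_pairs(P, pi)
    # the lumped chain stays stochastic and keeps its aggregated equilibrium
    print(P.shape[0], np.allclose(P.sum(axis=1), 1.0), np.allclose(pi @ P, pi))
```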

Semantic computing constitutes a special case for renormalization, since the hierarchy of semantic cycles creates long-range correlations which are specific for each and every case and change from one realization to another. The circumstances become more complicated since different hierarchical levels “interact” through inter-level feedbacks. At first glance the situation seems the same as with the algorithmic computing considered in the Introduction because, in both cases, specific correlations whose size depends on the concrete realization persist. However, there is a difference: under boundedness the correlations are subject to the constraint of boundedness, i.e. their presence should be consistent with the boundedness of rates and amplitudes at each and every spatial point and at any time. Therefore, the question is whether the decomposition theorem, central to boundedness, is powerful enough to ensure an unambiguous renormalization.

Next it is demonstrated that, indeed, the additive decomposition into specific and universal correlations is a necessary prerequisite for ensuring an unambiguous renormalization procedure. Moreover, this renormalization provides grounds for further hierarchical super-structuring of the same semantic-like type.

Let me start the proof with the assumption that there exists a renormalization-like protocol which leaves the hierarchy of the same semantic type. The major move ahead is that I consider the renormalization as an operational protocol in which the coarse-graining and the decimation have exclusive implementations. The assumption that the renormalization is an operational protocol is grounded in the fact that the laws at different hierarchical levels are different and algorithmically unreachable from one another. This assumption constitutes the major difference from the traditional notion of renormalization, viewed as an iterative procedure of coarse-graining followed by decimation which leaves the laws scale-invariant. It suggests that renormalization in the setting of boundedness needs special implementations of the coarse-graining and of the decimation, so as to provide different laws at different hierarchical levels and to meet their algorithmic unreachability as well. Surprisingly, as will be demonstrated next, the specially built operations of coarse-graining and decimation retain the major distinctive characteristics of their traditional counterparts, which is what prompts me to call the procedure proposed here a renormalization. Yet there are differences, and they will be revealed in the course of the consideration.

I suppose that the coarse-graining is implemented by integration over the physical correlations present in the power spectrum of any time series at each hierarchical level. Thus the coarse-graining results in the variances of the correlations which come from both the homeostatic pattern and the universal band. It should be stressed that a power spectrum comprises only steady correlations; the accidental correlations drop out by the very definition of the notion of a power spectrum. To recall, the power spectrum is the Fourier transform of the autocorrelation function, in which the accidental random correlations are averaged out and so drop out, so that only the steady correlations survive.
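
Since the coarse-graining is defined as integration over the power spectrum, the quantity it delivers is a variance. The small numerical check below is my own illustration, assuming that “integration over the power spectrum” is meant in the sense of the Parseval identity: summing the power spectrum of a bounded series reproduces its variance.

```python
import numpy as np

# Numerical check: integrating (summing) the power spectrum of a bounded series
# gives back its variance (Parseval identity).
rng = np.random.default_rng(2)
x = np.clip(np.cumsum(rng.uniform(-0.05, 0.05, 2048)), -1.0, 1.0)   # bounded series
x = x - x.mean()

spectrum = np.abs(np.fft.fft(x))**2 / len(x)**2   # two-sided, normalized power spectrum
print("variance from the time series :", round(np.var(x), 6))
print("variance from the spectrum sum:", round(spectrum.sum(), 6))
```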

The highly non-trivial point is that the power spectrum involves not only physical correlations, i.e. correlations which commence from physical processes, but also correlations that are not of physical origin and commence from the mere fact of boundedness. Next I prove that the contribution of the non-physical correlations is irrelevant to the renormalization.

The next step is to demonstrate that the frequency domains of both the specific and the universal components of the physical correlations are bounded at each and every hierarchical level. For this purpose I shall use the fact that the metric at each and every hierarchical level is Euclidean. Let me start by recalling that the Euclidean metric is strongly grounded in the notion of the nearest neighbor. A necessary condition for the existence of a metric is the possibility of a single-scale Voronoi tessellation of the corresponding space; and when each and every Voronoi cell comprises the current nearest neighbors, the metric is locally Euclidean. To recall, a Voronoi tessellation implies a single-scale partitioning of the space into cells such that the latter are densely packed. Taking into account that every path in the state space under boundedness is realized on a latticized subset (whose details vary from one sample lattice to another), each latticized subset is equivalent to a sample of a Voronoi tessellation. Keeping in mind that each cell in every sample of this Voronoi tessellation comprises the current nearest neighbors, we conclude that a state space under boundedness indeed retains a metric and that this metric is locally Euclidean. It is worth noting once again that the metric is locally Euclidean because the property that each Voronoi cell comprises the current nearest neighbors is exclusive to boundedness. Put briefly, the proof asserts that a set of lattices, each of which is equivalent to a Voronoi tessellation of the same scale, constitutes a continuous space with a locally Euclidean metric; and any particular path selects the unique latticized subset on which it moves.
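
The single-scale condition invoked above can be pictured with a small computational sketch of my own (it only illustrates the argument and is not part of the proof): a jittered square lattice plays the role of one latticized subset; its Voronoi tessellation is single-scale in the sense that the nearest-neighbor distances stay within bounded margins around the lattice constant.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

# One sample "latticized subset": a square lattice with bounded jitter.
rng = np.random.default_rng(3)
a = 1.0                                      # lattice constant (the single scale)
grid = np.array([[i, j] for i in range(20) for j in range(20)], dtype=float) * a
points = grid + rng.uniform(-0.2 * a, 0.2 * a, grid.shape)   # bounded perturbation

vor = Voronoi(points)                        # one sample Voronoi tessellation
d, _ = cKDTree(points).query(points, k=2)    # distance to the current nearest neighbor
nn = d[:, 1]
print(len(vor.point_region), "Voronoi cells in this sample tessellation")
print("nearest-neighbor distances stay within:", round(nn.min(), 3), "-", round(nn.max(), 3))
# bounded jitter keeps this range within fixed margins around the lattice constant,
# which is the single-scale property used for the locally Euclidean metric
```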

As a result of the Euclideanity of the local metric, there exist specific frequency domains for the specific and universal components of the power spectrum at each and every hierarchical level. Their boundaries are given by the corresponding margins of variations set by the boundedness. It is worth noting that the frequency domain of each and every hierarchical level is specific to it. It is also worth noting that, though the infra-red cut-off of the universal component is set by the length of the corresponding time series, its contribution to the correlations is an alias, since the correlations which it comprises are of mathematical origin; the physical correlations are limited by the boundedness of amplitudes and rates. Moreover, the boundedness of the physical correlations results in the highly non-trivial property that the effective contribution of the non-physical correlations to the corresponding variance is robust to the length of the time series and is insensitive to the details of any given realization.

Thus, the coarse-graining at each hierarchical level results in the sum of the variances of the specific and universal correlations because of the exclusive properties of the specific and universal components, namely their additivity and the robustness of each of them to the details of the variations in any given time series, and hence their insensitivity to any concrete realization of the response. It is worth noting that the additive decomposition of a power spectrum into specific and universal correlations makes their “interaction” impossible [1] [5]. Thus, this lack of interaction imposes a ban on the transformation of specific correlations into universal ones and vice versa. In turn, this ban constitutes a ban on acquiring information out of noise alone; I have called it a ban over information perpetuum mobile [5]. As becomes evident now, the role of the additive decomposition is multi-purpose: it provides not only the unambiguity of the causality of the correlations embedded in each specific pattern, but it also serves as an implement for keeping that unambiguity invariant under the operation of coarse-graining, viewed as an ingredient of the renormalization protocol derived here.

The decisive constraint for establishing an unambiguous procedure of decimation is the requirement that the frequency domains of different hierarchical levels do not overlap. The non-overlapping of the frequency domains is necessary for avoiding interference of the other hierarchical levels with the homeostatic pattern of any given one. Further, given the independence of the homeostatic patterns at different hierarchical levels, the result of the coarse-graining at each hierarchical level automatically becomes independent of the details of any concrete realization of the semantic computing. As a result, different hierarchical levels compute autonomously. In consequence of that autonomy, the details of the lower-level computations can be stored in a memory, so that only the higher-level operations are processed in a given case. Thus, this “shortcutting” of the lower-level computing plays the role of the “decimation”.
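
The “shortcutting” can be pictured schematically. In the sketch below the band edges, function names and caching policy are my own illustrative choices rather than the paper's formal objects: because the two levels occupy non-overlapping frequency bands, the lower-level result is computed once, stored in memory, and reused, while only the higher-level operation is processed anew.

```python
import numpy as np

# Schematic "decimation as shortcutting": two levels with non-overlapping bands;
# the lower-level coarse-grained result is memoized.
LOW_BAND, HIGH_BAND = (0.0, 0.1), (0.2, 0.5)   # non-overlapping by assumption

_lower_cache = {}

def lower_level(signal_id, x):
    """Coarse-grained lower-level result: variance of the low-band part, memoized."""
    if signal_id not in _lower_cache:
        spec = np.abs(np.fft.rfft(x - x.mean()))**2 / len(x)**2
        f = np.fft.rfftfreq(len(x))
        _lower_cache[signal_id] = spec[(f >= LOW_BAND[0]) & (f < LOW_BAND[1])].sum()
    return _lower_cache[signal_id]

def higher_level(signal_id, x):
    """Higher-level operation: its own band plus the stored lower-level result."""
    spec = np.abs(np.fft.rfft(x - x.mean()))**2 / len(x)**2
    f = np.fft.rfftfreq(len(x))
    high_var = spec[(f >= HIGH_BAND[0]) & (f < HIGH_BAND[1])].sum()
    return lower_level(signal_id, x) + high_var   # lower part is a cached, "decimated" result

rng = np.random.default_rng(4)
x = np.clip(np.cumsum(rng.uniform(-0.05, 0.05, 1024)), -1, 1)
print(higher_level("sample", x))        # first call fills the cache
print(higher_level("sample", x))        # lower level is not recomputed
```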

Let me stress the fact that, in the setting of boundedness, the procedures of coarse-graining and decimation are intertwined through the condition of non-overlapping frequency domains, whereas they are independently defined in the traditional notion of renormalization.

Outlining, though the realization of the grounding requirement of non-overlapping demands a lot of ingenuity to be built up artificially, its pay-off comes as a renormalization procedure exclusive to semantic computing which provides the major property of semantic computing: autonomous computing at the different hierarchical levels. In turn, this autonomy is the implement for an essential non-extensive reduction of computation costs and an essential non-extensive speeding up of computing. The non-extensivity of semantic computing delineates its fundamental difference from the traditional algorithmic computation, where the logical operations are executed by linear processes only, which binds any speeding up of computing to be extensive and leaves any reduction of computation costs subject to an ingenious, specific shortening of the concrete algorithm made by an external agent (the human mind).

In a nutshell, the building up of this special type of renormalization involves the following sequence of steps: the boundedness of amplitudes and rates provides the additive decomposition of the power spectrum into a specific and a continuous band such that their characteristics are robust to the variations in any given time series. Along with that, the boundedness provides that the metric is always locally Euclidean. In consequence of the non-trivial interplay between these circumstances, the frequency domain of all physical correlations at each and every hierarchical level is bounded within specific margins. This, together with the requirement of non-overlapping of the frequency domains of different hierarchical levels, provides that the coarse-graining (viewed as integration over the power spectrum at the corresponding level) results in the sum of the variances of the specific and universal correlations. It is worth noting that this provides the unambiguity of the process of coarse-graining. Next in line comes the “decimation”, i.e. storing the lower-level information in memory and operating only on the higher hierarchical levels. Thus, the major goal of the paper is achieved: the new type of renormalization procedure provides an essential speeding up of computing and an essential reduction of the computation costs in an unambiguous way.
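
For orientation only, the sequence of steps just listed can be strung together in a compact sketch; every function name and band edge below is a hypothetical placeholder of mine, not a formal object of the theory.

```python
import numpy as np

def coarse_grain(x, band, dt=1.0):
    """Integrate the power spectrum over the level's bounded frequency domain -> a variance."""
    spec = np.abs(np.fft.rfft(x - x.mean()))**2 / len(x)**2
    f = np.fft.rfftfreq(len(x), dt)
    return spec[(f >= band[0]) & (f < band[1])].sum()

def renormalize(levels):
    """levels: list of (time_series, band) ordered bottom-up with non-overlapping bands."""
    bands = [b for _, b in levels]
    assert all(b0[1] <= b1[0] for b0, b1 in zip(bands, bands[1:])), "bands must not overlap"
    memory = [coarse_grain(x, b) for x, b in levels[:-1]]   # "decimation": store lower levels
    top = coarse_grain(*levels[-1])                         # only the top level is processed
    return memory, top

rng = np.random.default_rng(5)
series = [np.clip(np.cumsum(rng.uniform(-0.05, 0.05, 1024)), -1, 1) for _ in range(3)]
print(renormalize(list(zip(series, [(0.0, 0.1), (0.1, 0.25), (0.25, 0.5)]))))
```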

Next let me briefly outline the differences between the renormalization proposed above and the already existing models. Let me start with the renormalization proposed by Wilson [4]. The first fundamental difference is that the latter operates with an infinite frequency domain for the physical correlations. The convergence to a fixed point is a result of the supposition that the interactions are local and remain local after the coarse-graining. Further, it is assumed that the laws which yield long-range correlations are scale-invariant. As a result, the decimation is considered as an operation which sustains the scale-invariance, so that after each iteration the laws remain the same. The irrelevance of that procedure to semantic computing lies in the following: 1) according to boundedness, the laws at different hierarchical levels are different and, moreover, algorithmically unreachable from one another; thus they are not scale-invariant; 2) in Wilson's approach, the considered systems are embedded in a steady environment governed by control parameters; thus, phase transitions happen at specific concrete values of the control parameters (e.g. the temperature for the Ising model). In contrast, semantic computation operates in each and every unspecified ever-changing environment, provided only that the latter is bounded.

It is worth noting that, though in the setting of boundedness the laws at different hierarchical levels are different and algorithmically unreachable from one another, and thus not scale-invariant, the form of the governing equation from which these laws commence remains invariant in the state space of each and every hierarchical level. This matter is discussed in Chapter 7 of [1]. The exclusive advantage of the invariance of the form of the governing equation is that it provides the invariance of the structure of the state space at each and every hierarchical level, in the sense that the state space at each hierarchical level is partitioned into domains-of-attraction specific to it. The outcome of this fact is that it provides not only the availability of further hierarchical super-structuring by adding levels, but also the necessary conditions for the renormalization to hold for the newly built hierarchical levels as well. To recall, these conditions are: the decomposition of the power spectrum into a discrete and a continuous band with properties robust to the details of the variations, and the maintenance of a locally Euclidean metric. Outlining, the differences listed above result in the fact that the renormalization proposed here operates in every environment if only the latter is bounded, and it does not require fine-tuning in control-parameter space as in the case of the traditional theory.

The case of a Markovian process differs from boundedness at its very core, since the traditional probabilistic approach is inconsistent with the decomposition theorem. To recall, the general constraint on a Markov process is that it must be representable as a sequence of transitions among independent random events. This constraint, however, does not hold in the setting of boundedness, where the probabilities of the states and the transition rates change non-homogeneously in order to meet the boundedness of amplitudes and rates in each and every locality; thus their mutual independence is not guaranteed. On the other hand, the decomposition theorem is valid for an arbitrary sequence of events, provided only that the latter are bounded.

Outlining, the renormalization under boundedness is a generic operational protocol available for intelligent complex systems. Its major advantage is that it provides an essential reduction of computation costs and a speeding up of computing in an unambiguous way. The fundamental novelty of the latter is provided by a highly non-trivial interplay between structural and functional properties aimed at providing the necessary conditions for the availability of the renormalization. It is worth noting that there is another advantage of the proposed renormalization: along with the reduction of the computation costs and the speeding up of computing, it allows further hierarchical super-structuring at which the same properties hold.

4. Conclusions

The renormalization under boundedness appears as a generic operational protocol available for intelligent complex systems, subject to the constraint of building their hierarchical structure so as to provide autonomous operation at the different hierarchical levels. In turn, the autonomy of operation of the different hierarchical levels makes it possible for the details of the lower-level computations to be stored in a memory, so that only the higher-level operations are processed in a given case. A very important property of that reduction of the computational costs is that it happens in an unambiguous way and in an ever-changing environment.

An exclusive property of the renormalization under boundedness is that it appears as an operational protocol, not as an iterative procedure of coarse-graining followed by decimation (as renormalization was originally introduced). This is provided by the fact that the laws which operate at different hierarchical levels are not only different but also algorithmically unreachable from one another. This renders the exclusive property of semantic computation: the “computation” of algorithmically unreachable laws in a specific, well-defined computation time and at specific, well-defined costs. It is worth noting that the physical “distance” between algorithmically unreachable states is always finite and is measured by the characteristics of the self-organized pattern which represents the corresponding homeostasis; thus it always comprises only a bounded amount of matter/energy/information. On the other hand, the measure of algorithmic unreachability is the exclusive presence of the non-recursive component which persists along with each and every homeostatic pattern. The persistent presence of a non-recursive component in a power spectrum is a property exclusive to bounded series, since for their unbounded counterparts any such component is shifted to the far-infrared infinity.

Outlining, semantic computing is not only very effective with respect to speed and costs but also much more powerful, since it provides the computing of algorithmically unreachable objects by well-defined means. By comparison, the traditional information approach cannot provide any computing of algorithmically unreachable objects by well-defined means because of the halting problem. An immediate consequence of this fact is that semantic computing never halts, while its algorithmic counterpart not only halts sporadically but it is a priori unknown which algorithm will halt.

What makes the renormalization under boundedness so promising is that the same constraints provide not only an essential reduction of computational costs and a speeding up of the computing but also further hierarchical super-structuring, such that the major properties of semantic computing hold at the newly added levels. This, in turn, opens the door to a more efficient organization of the semantic intelligence.

Outlining, the major future goals become intertwined: whilst the traditional information theory is in a quest for the fastest and smallest computers, semantic computing seeks the most efficiently organized ones. Yet both tasks are rather counterparts than opponents since, though the traditional hardware is able to execute every algorithm, it cannot create and comprehend information autonomously. On the other hand, though the semantic hardware is constrained by the need for a specific match to the software, semantic intelligence is able to create and comprehend information in an autonomous way, at an essential non-extensive reduction of computation costs and an essential speeding up of computing.

Cite this paper

Koleva, M.K. (2018) Renormalization of Hierarchy and Semantic Computing. Journal of Modern Physics, 9, 335-348. https://doi.org/10.4236/jmp.2018.93024

References

  1. Koleva, M.K. (2012) Boundedness and Self-Organized Semantics: Theory and Applications. IGI-Global, Hershey, PA.

  2. Koleva, M.K. (2017) Semantic Intelligence. In: Khosrow-Pour, M., Ed., Encyclopedia of Information Science and Technology, 4th Edition, IGI-Global, Hershey, PA.

  3. Feller, W. (1970) An Introduction to Probability Theory and Its Applications. John Wiley & Sons, New York.

  4. Wilson, K.G. and Kogut, J. (1974) Physics Reports, 12, 75-199.

  5. Koleva, M.K. (2017) Journal of Modern Physics, 8, 299-314. https://doi.org/10.4236/jmp.2017.83019