Sociology Mind
2013. Vol.3, No.4, 268-277
Published Online October 2013 in SciRes (http://www.scirp.org/journal/sm) http://dx.doi.org/10.4236/sm.2013.34036
Dynamic Knowledge—A Century of Evolution
Georg F. Weber
University of Cincinnati, Cincinnati, USA
Email: georg.weber@uc.edu
Received July 25th, 2013; revised August 24th, 2013; accepted September 4th, 2013
Copyright © 2013 Georg F. Weber. This is an open access article distributed under the Creative Commons At-
tribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the
original work is properly cited.
The discovery of non-linear systems dynamics has reshaped concepts of knowledge, ascribing to
knowledge dynamic properties. It has extended a development whose roots reach back more than a
hundred years. Then, certainty was sought in systems of scientific insight. Such absolute certainty was inevitably static, as it
would be irrevocable once acquired. Although principal limits to the obtainability of knowledge were de-
fined by scientific and philosophical advances from the 1920s through the mid-twentieth century, the
knowledge accessible within those boundaries was considered certain, allowing detailed description and
prediction within the recognized limits. The trend shifted away from static theories of knowledge with the
discovery of the laws of nature underlying non-linear dynamics. The gnoseology of complex systems has
built on insights of non-periodic flow and emergent processes to explain the underpinnings of generation
and destruction of information and to unify deterministic and indeterministic descriptions of the world. It
has thus opened new opportunities for the discourse of doing research.
Keywords: Theory of Knowledge; Complexity; Information; Chance; Necessity
Introduction
The acquisition of certain, indisputable knowledge has been
a fundamental desire throughout the existence of mankind. For
this purpose, the basic rules of thought have been established in
the subject of logic, going back to Aristotle. The basic rules of
inquiry have been developed, mainly since the period of the En-
lightenment (but rooted as far back as Galileo), in the field of
methodology. Advances in both areas have contributed to gains
in the content of knowledge through the demarcation of science
and the characterization of the scientific approach. In turn, both
areas have also shaped our concepts of the nature of knowl-
edge1,2. The interdependence and cross-fertilization between the
theory of science and scientific progress has arguably increased
in recent history. An investigation into the developments in
epistemology over more than a century displays three periods
of thought. They evolve from early attempts to define absolute
certainty through axiomatization (~1880-1920s) via discoveries
of insurmountable limits to the obtainability of knowledge
(~1920s-1960s) toward the description of rational inquiry as a
dynamic process that has its foundations in insights from non-
linear systems research (~1960s onward). This evolution was
initially driven by developments in the theory of knowledge,
which were then applied to the empirical sciences, but in the
second and third phases was increasingly shaped by progres-
sions in the sciences, which required reevaluations in the theory
of knowledge. Its outcome is characterized by a redefinition of
knowledge from a definitive and cumulative entity to a prob-
abilistic and evolving process.
Evolution of Knowledge
Absolute Certainty
It was the nineteenth century view that the world was a ma-
chine, which was fully predictable if all positions and momenta
of all its objects could be measured. The predominant scientific
philosophical foundation of the time was determinism. In this
environment, the mathematical schools around Hilbert and Frege
tried to make knowledge definitive through axiomatization. From
their basis, Russell developed analytic philosophy, a quasi-re-
ductionist approach that built on logic and mathematics to ana-
lyze specific problems. Russell’s philosophy was expanded by
the Vienna Circle to logical empiricism, which strove to obtain
definitive answers in the empirical sciences. The period is char-
acterized by an extension of the formal concepts devised for
generating certainty in mathematics via their applications in
logic and language to the empirical sciences with the goal of
making knowledge in these areas certain as well.
Meta-Mathematics—Hilbert
The axiomatic method in geometry consists of accepting,
without proof, certain propositions (axioms), from which all
other propositions are derived as theorems. Because mathemat-
ics studies strings of signs that have no inherent meaning, a
general method for testing internal consistency of the theorems
was devised in the conception of models, such that each propo-
sition is converted into a true statement about the model. This
1Quantum mechanics has precipitated profound gnoseological revisions. As
a case in point, the meaning of the term “Verstehen” (German for “under-
standing”) is discussed repeatedly in Heisenberg (1984), Chapters 3,10.
2Non-linear systems research has redefined the nature of chance and deter-
minism (Ruelle, 1991; Favre, et al., 1988). “Computing theory is spawning
ways of modeling complexity and disorder by describing information in
algorithmic forms. In this way, chaos is revealing fundamental limits to
human knowledge in an uncomfortable way.” (Hall, 1993: Introduction).
approach has limitations. The interpretation of axioms by mod-
els composed of an infinite number of elements makes it im-
possible to encompass the models in a finite number of obser-
vations. Also, the question of consistency of the axiomatic
method in geometry may be deferred to a question about the
consistency of the model. This was the case for David Hilbert’s
translation of Euclidean axioms into algebraic truths, which
showed that if algebra is consistent so is the Euclidean system
of geometry. Therefore, the model method does not provide a
final answer to the problem it was designed to solve.
It was Hilbert’s declared goal to firmly root arithmetic, and
building on it the entirety of mathematics, in an axiomatic sys-
tem that should be provably free of contradictions (the “Hilbert
program” in the context of which he later developed the Hilbert
calculus3). He formulated his program with concrete methods
for solving the consistency problem. He promoted meta-mathe-
matics as a way of perfecting the axiomatic method via con-
structing mathematics on a solid and complete logical founda-
tion. Hilbert believed that in principle this could be done, by
showing that
All of mathematics follows from a correctly chosen finite
system of axioms;
There is an axiom system that is consistent, provable through
some means such as the epsilon calculus4.
Hilbert’s approach constituted the shift to the modern axio-
matic method, wherein axioms are not taken as self-evident
truths but as hypotheses to be tested. Geometry may refer to
objects, about which we have strong intuitions, but it is not
necessary to assign any explicit meaning to them because only
their defined relationships are subject to discussion. Hilbert thus
addressed the antinomies of naïve set theory and attempted to
preserve the entire classical mathematics and logic (without
losing Cantor’s set theory that had been shaken by the discov-
ery of paradoxes5).
Predicate Logic—Frege
In meta-mathematics, a finitistic procedure must show that
antinomies cannot be derived by stated rules of inference from
the axioms. If the derivation of a single antinomy from the
axioms is possible then any formula whatsoever is deductible.
Conversely, if there is at least one formula that cannot be de-
rived then the calculus is consistent. Once consistency is es-
tablished, it is of interest whether an axiomatized system is
complete. To establish consistency and completeness, Gottlob
Frege strove to develop a universal language of pure reason, in
which “nothing is left to guesswork”. He attempted to arith-
metize every individual scientific method, so that the truth of
every scientific statement can be tested6. Universality was a
declared goal in the development of the formalism. It was an
idea previously conceived of, but not developed, by Leibniz.
Frege’s work was intended to fulfill the need of mathematics
for exact foundations and stringent axiomatic treatment. He
attempted to devise a science of reason, which formalizes con-
tent such that it can be logically evaluated. Frege’s “Be-
griffsschrift” in 1879 developed this axiomatic form of logic, a
second level predicate logic with a concept for identity, which
contained the core features of modern formal logic. Central to
Frege was the discussion of equivalence in content. He took it
that the statements used in mathematics are important only
because of the non-linguistic propositions (the “thoughts”) they
express. Mathematicians working in various languages work on
the same subject because their statements express the same
thoughts. According to this view, thoughts are the elements that
logically imply or contradict one another, that are true or false,
and that together constitute mathematical theories. Each thought
is about a determinate subject-matter, and makes a true or false
statement about that subject-matter. A question about the con-
sistency of a set of geometric axioms is a question about a spe-
cific set of thoughts. Because thoughts are determinately true or
false, and have a determinate subject-matter, it makes no sense
to talk about the “reinterpretation” of thoughts. From Frege’s
point of view, the kind of reinterpretation Hilbert engaged in
(assigning different meanings to specific words) can apply only
to statements and never to thoughts. Frege noted a difficulty
with Hilbert’s approach in the meaning of the term “axioms”. If
it means the elements for which issues of consistency and in-
dependence can arise, then it must refer to thoughts, whereas if
it means elements which are susceptible to multiple interpreta-
tions, then it must refer to statements. Frege distinguished his
work from the theories by Immanuel Kant, who had considered
arithmetic statements to be synthetic judgments a priori, and
John Stuart Mill, for whom arithmetic statements were general
laws of nature confirmed by experience. Bertrand Russell adopted
Frege’s predicate logic as his primary philosophical method,
which he thought could expose the underlying structure of phi-
losophical problems.
Frege’s contribution to logic is the development of a formal
language, and with it a formalism for proof, which makes him
one of the forefathers of analytic philosophy. In contrast to
Husserl’s 1891 book “Philosophie der Arithmetik”, which at-
tempted to show that the concept of the cardinal number is
derived from psychological acts of grouping objects and count-
ing them, Frege sought to show that mathematics and logic
have their own validity, independent of the judgments or mental
states of individual mathematicians and logicians (which were
the basis of arithmetic according to the “psychologism” of
Husserl’s philosophy).
Logicism and Analytic Philosophy—Frege/Russell
Richard Dedekind and Gottlob Frege laid the foundations for
the mathematical-philosophical program of logicism. This school
of thought in the philosophy of mathematics puts forth the the-
ory that mathematics is an extension of logic and therefore
some or all statements of mathematics are reducible to logic.
Bertrand Russell and Alfred North Whitehead championed this
theory. Like Frege, Russell and Whitehead attempted to show
that mathematics is reducible to fundamental logical principles.
3Hilbert strove to carry out final proofs with his formalism. With today’s
historical perspective, arguably, he succeeded in formalizing computation,
not deduction (see also Chaitin, 1999: Chapter I).
4The epsilon calculus is an extension of a formal language by the epsilon
operator, where the operator substitutes for quantifiers in that language as a
method leading to a proof of consistency for the extended formal language;
the epsilon operator and epsilon substitution method are typically applied to
a first-order predicate calculus, followed by a demonstration of consistency.
5Georg Cantor had developed a theory of infinite sets. The ordinal numbers
indicated positions on an infinite list, while the cardinal numbers measured
the size of infinite sets. He had shown that the set of all subsets of a given
set is always bigger than the set itself. Bertrand Russell recognized that the
set of all subsets of the universal set cannot be bigger than the universal set
itself. He identified a critical paradox in the set of all sets that are not mem-
bers of themselves—a condition impossible to satisfy.
6The Frege program went beyond Hilbert’s ambitions by expanding an
axiomatic approach beyond mathematics to a language for all sciences.
They collected evidence in support of the assertion of logicism
in their Principia Mathematica. However, logicism was brought
to a deep crisis with the discovery of the classical paradoxes of
set theory (by Cantor in 1896, by Zermelo and Russell in 1900-
1901). Frege gave up on the project after Russell communicated
his exposition of an inconsistency in naïve set theory (the “Rus-
sell antinomy”7). Nevertheless, Frege’s research had provided
the groundwork for others to develop the logicistic program.
Late nineteenth-century English philosophy was dominated
by British idealism8, as taught by philosophers such as Francis
Herbert Bradley and Thomas Hill Green. Against this intellec-
tual background, Bertrand Russell and George Edward Moore,
articulated their program of analytic philosophy, a basic princi-
ple of which is conceptual clarity. Inspired by the developments
in logic, specifically Frege’s predicate logic, Russell claimed
that the problems of philosophy can be solved by demonstrating
the simple constituents of complex notions. This approach dif-
fers from that of Locke, Berkeley, and Hume by its incorpora-
tion of mathematics and its development of a logical technique.
It is thus able to achieve definite answers to certain problems,
which have the quality of a science rather than of a philosophy.
Compared with the philosophies of system-builders, the quasi-
reductionist approach of analytic philosophy is able to tackle its
problems one at a time, instead of having to devise a theory of
the whole universe. Its methods, in this respect, resemble those
of the applied sciences. Russell had no doubt that, in so far as
philosophical knowledge was possible, it had to be sought by
such approaches which could make many long-standing prob-
lems completely solvable.
Logical Empiricism—The Vienna Circle
The Vienna Circle (“Der Wiener Kreis”) was a group of phi-
losophers, gathered around the University of Vienna in 1922,
that developed the formalisms of Bertrand Russell and Ludwig
Wittgenstein into the school of logical positivism (neopositiv-
ism, which later evolved into logical empiricism). Logical em-
piricism used formal logic to underpin an empiricist account for
our knowledge of the world (Hahn et al., 19299). Similar phi-
losophical concepts were pursued simultaneously by the Berlin
Circle (“Berliner Gruppe”, later “Berliner Gesellschaft für em-
pirische Philosophie”).
The Vienna Circle considered logic and mathematics to be
analytic in nature. Extending Wittgenstein’s insights about logi-
cal truths to mathematical ones, the Vienna Circle viewed both
as tautological. Like the true statements of logic, true state-
ments of mathematics did not express factual truths. Being de-
void of empirical content, they only concerned ways of repre-
senting the world by spelling out implicit relations between
statements. The knowledge claims of logic and mathematics
gained their justification on purely formal grounds, by proof of
their derivability via stated rules from stated axioms and prem-
ises. Thus, the contribution of pure reason to knowledge (in the
form of logic and mathematics) was thought to be easily inte-
grated into the empiricist framework10.
The synthetic statements of the empirical sciences were held
to be cognitively meaningful if—and only if—they were em-
pirically testable in some sense. These statements derived their
justification as knowledge claims from successful tests. For this
purpose, the Vienna Circle applied a meaning criterion. While
the correct formulation was much debated, it mandated that
synthetic statements, which failed testability in principle, were
considered to be cognitively meaningless and to give rise only
to pseudo-problems11. No third category of significance besides
that of a priori analytic and a posteriori synthetic statements
was admitted. In particular, Kant’s synthetic a priori was banned
as having been refuted by the progress of science. Hence, the
Vienna Circle rejected the knowledge claims of metaphysics as
being neither analytic and a priori nor empirical and synthetic.
Combined with the rejection of rational intuition, the Vienna
Circle’s exclusive apportionment of reason into either formal a
priori reasoning, issuing in analytic truths or contradictions, or
substantive a posteriori reasoning, issuing in synthetic truths or
falsehoods, was very characteristic for the philosophy of the
time. The logical empiricist principle stated that there are no
specifically philosophical truths and that the object of philoso-
phy is the logical clarification of thoughts.
Thus, a theory of scientific knowledge was propagated that
sought to renew empiricism by freeing it from the impossible
task of justifying the claims of the formal sciences. The Vienna
Circle strove to reconceptualize empiricism by means of their
interpretation of then recent advances in the physical and for-
mal sciences. Their anti-metaphysical stance12 was supported
by an empiricist criterion of meaning and a broadly logicist
conception of mathematics. Moreover, the Circle sought to ac-
count for the presuppositions of scientific theories by regiment-
ing such theories within a logical framework so that the impor-
tant role played by conventions, either in the form of definitions
or of other analytical framework principles, became evident.
The theories of the Vienna Circle helped to provide the blue-
print for an analytical philosophy of science as a meta-theory.
Limits to Absolute Knowledge
The early part of the twentieth century was the time when
principal limits to the obtainability of knowledge were identi-
fied. For the researchers and philosophers of those days, the
boundaries of knowledge were increasingly revealed during ef-
forts to complete the edifice of the preceding period and put the
scientific discourse on absolutely certain foundations. Contri-
butions to defining these limits came from the natural sciences,
mathematics/computation, and philosophy. Thermodynamics and
particle physics began to expose the confines of the nineteenth
century mechanistic and deterministic world view. Yet, these
were initially observations of specialized sciences that seemed
to show practical rather than profound constraints. However, in
a largely parallel development, attempts at final proofs in meta-
7See Footnote 5.
8British idealism is broadly characterized by a belief in a single all-encom-
passing reality—an absolute, the assignment of reason as the faculty to grasp
the absolute, the rejection of a dichotomy between thought and object.
9The exact authorship of the brochure is subject to some debate (see Uebel
2008).
10“We have characterized the scientific world-conception essentially by two
features. First, it is empiricist and positivist: there is knowledge only from
experience, which rests on what is immediately given. This sets the limits for
the content of legitimate science. Second, the scientific world-conception is
marked by the application of a certain method, namely logical analysis. The
aim of scientific effort is to reach the goal, unified science, by applying this
logical analysis to the empirical material.” (Hahn et al., 1929, translated).
11Note the central role of testability, which later also played a fundamental
role in Popper’s philosophy, although there it was narrowed to falsifiability.
12“It has become ever clearer that the attitude which is not merely free of
metaphysics but anti-metaphysical is the common goal of all.” (Hahn et al.,
1929, translated).
mathematics (a continuation of the programs developed by
Hilbert and Frege) revealed incompleteness and uncomputabil-
ity. Hence, the restrictions to what is knowable transcend the
applications of physics. Philosophy revealed the logical short-
comings in the principle of induction for testing hypotheses (the
synthetic statements of logical empiricism), a recognition that
led to the development of a hypothetical-deductive theory of
science. Rather than defining a path to absolute knowledge, this
philosophy elucidated a principal limit within the empirical sci-
ences in the impossibility to verify general theories. Despite
these fundamental developments showing the unachievability
of absolute certainty, it was a characteristic of that time that the
knowledge accessible within the confines of the scientific sys-
tems of thought was considered stable and definitive13.
Uncertainty—Heisenberg
In 1900, Max Planck suggested that waves could not be
emitted at an arbitrary rate but only in quanta, each of which
had a certain amount of energy that increased with the fre-
quency of the waves. There was intense debate over the formal
explanation for this phenomenon. Schrödinger’s equation in
quantum mechanics, like the canonical equation in classical
physics, expresses a reversible and deterministic process. If the
wave function at a given instant is known it can be calculated
for any previous or subsequent instant. However, the properties
of particles are measureable only in terms of probability distri-
butions. The Schrödinger equation predicts what the probability
distributions are, but fundamentally cannot predict the exact
result of each measurement. In quantum mechanics, classical
determinism becomes inapplicable; and statistical considera-
tions, introduced through the wave intensity, play a central role.
In 1927, Werner Heisenberg built on Planck’s quantum hypothesis to formulate
the uncertainty principle. The precise description of the future
of a particle depends on the exact determination of its present
position and velocity. Quantum mechanics has shown that no
measurement ever leaves the system to be measured undis-
turbed. So, the position of a particle cannot be measured more
precisely than the wavelength of the light used for its measure-
ment. The shorter the wavelength of the light, the higher its
energy and the more it perturbs the velocity of the particle being
measured. Hence, the more accurately the position of a particle
is determined the less accurately its velocity can be assessed.
The analogous relationship exists between energy and time.
The uncertainty principle describes a physical limit to the pre-
cision of obtainable knowledge14: phase space is divisible only
into blocks of a minimum size, which represent states. This
also has implications for the information content of an event,
and for the analysis of non-linear events in the ensuing phase
of inquiry.
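Stated formally (a standard textbook formulation, added here for reference rather than quoted from the sources above), the two relations read

\Delta x \, \Delta p \ \ge\ \hbar/2 , \qquad \Delta E \, \Delta t \ \ge\ \hbar/2 ,

where \hbar is the reduced Planck constant; the first relation bounds the division of phase space into cells of finite size, the second is the energy-time analogue mentioned in the text.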
Incompleteness—Gödel
Gödel demonstrated the limits of the axiomatic method. He
constructed an arithmetical formula that represents the meta-
mathematical statement “This formula is not demonstrable” and
showed that it is demonstrable only if its negation also is de-
monstrable. The formula is, therefore, true (by meta-mathe-
matical criteria) and undecidable within the confines of arith-
metic, implying that the axioms of arithmetic are incomplete.
Even if additional axioms were to be assumed so that the true
formula could be derived from the set of arguments, another
true but undecided formula could be constructed in the ex-
panded system. This conclusion holds, no matter how often the
original system is enlarged. Next, Gödel described how to con-
struct an arithmetical formula that represents the meta-mathematical
statement “arithmetic is consistent” and he proved that the for-
mula “if arithmetic is consistent then this formula is not demon-
strable” is formally demonstrable while the statement “arithmetic
is consistent” is not. It follows that the consistency of arithme-
tic cannot be established by an argument that can be repre-
sented in the formal arithmetical calculus. Gödel showed that it
is impossible to give a meta-mathematical proof of the consis-
tency of a system comprehensive enough to contain the whole
of arithmetic unless the proof itself employs rules of inference
different from the transformation rules used in deriving theo-
rems within the system. Therefore, the consistency of the as-
sumptions in the reasoning is as subject to doubt as is the con-
sistency of arithmetic. Furthermore, Gödel characterized a fun-
damental limitation in the power of the axiomatic method by
showing that any system within which arithmetic can be devel-
oped is essentially incomplete, that is there are true arithmetical
statements that cannot be derived from the set of underlying
axioms. He demonstrated the untenability of the assumption
that the totality of true propositions can be developed system-
atically from a set of axioms. It is impossible to establish the
internal logical consistency of a large class of deductive sys-
tems unless one adopts principles so complex that their internal
consistency is as open to doubt as that of the systems them-
selves (Gödel, 1931; Nagel/Newman, 1958).
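In modern notation (a standard summary added here for reference, not drawn verbatim from Gödel, 1931, or Nagel/Newman, 1958), for a consistent, effectively axiomatized system F that contains arithmetic, the diagonal construction yields a sentence

G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)

which F can neither prove nor (assuming \omega-consistency, as in Gödel's original argument) refute; for the second theorem,

F \vdash \mathrm{Con}(F) \rightarrow G_F \quad \text{while} \quad F \nvdash \mathrm{Con}(F),

that is, the conditional is demonstrable within F but the consistency statement itself is not.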
Uncomputability—Turing/Kolmogorov/Chaitin
With the aim of solving Hilbert’s Entscheidungsproblem chal-
lenge to automate testing the truth of mathematical statements,
Turing introduced a mechanistic approach to a procedure that
could decide their validity. The model of computation he pro-
posed, now called the Turing machine (a universal computer
programmed to carry out any computation whatsoever), con-
sists of an infinite tape that stores symbols and a finite-state
controller that sequentially reads symbols from the tape and
writes symbols to it. The Turing machine is deterministic inso-
far as the tape contents exactly determine the machine’s behavior.
Given the present state of the controller and the next symbol
read off the tape, the controller goes to a unique next state,
writing at most one symbol to the tape. The input determines
the next step of the machine, and the tape input determines the
entire sequence of steps the Turing machine goes through.
According to the Church-Turing thesis, established in 1936
by Alan Turing and Alonzo Church (Emil Post developed similar
concepts independently), a universal Turing machine can com-
pute anything at all computable. At the most basic level, the
Turing machine uses discrete symbols and advances in discrete
time steps. However, not every Turing computation halts when
presented with a given input string. With this recognition, Alan
Turing expanded Gödel’s theorem to state that it may not be
possible to predict whether a universal computer will ever halt
when started with a given input data string15. Turing deduced as
13This is reflected in the literature of that period, which contains ample references to
the limits of objective knowledge (Einstein, 1954; Popper, 1972; Barrow 1998).
14For a philosophical discussion about the inevitably resulting incompleteness
of knowledge see Heisenberg (1984) Chapter 10.
15“Metamathematics was promoted, mostly by Hilbert, as a way of perfecting
the axiomatic method, as a way of eliminating all doubts. But this meta-
mathematical endeavor exploded into mathematicians’ faces, because, to every-
one’s surprise, it turned out impossible to do. Instead it led to the discovery
by Gödel, Turing, and [Chaitin] of metamathematical results, incomplete-
ness theorems, that place severe limits on the power of mathematical rea-
soning and on the power of the axiomatic method.” (Chaitin, 1999: Chapter I).
a corollary that there is also no axiomatic system to predict
whether an arbitrary program will ever halt. While Turing’s
analysis demonstrated the completeness of computing formal-
isms16, it showed the incompleteness of deductive formalisms,
which helped start the field of numerical analysis. According to
the principle of computational equivalence (Wolfram, 200217),
all systems that exhibit more than simple behavior have equal
computational powers and can serve as universal computers.
Since no universal computer can outstrip any other, most proc-
esses in the world are inherently computationally irreducible.
Even a set of ultimate rules that run the universe would not
allow any predictions about its outcome without running it
through a computer program. Many simple combinatorial sys-
tems have complicated and unpredictable behavior, which means
they achieve computational universality.
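The core of Turing's halting argument can be sketched in a few lines of present-day code (an illustrative construction added here; the function halts below cannot actually be implemented, which is precisely what the argument establishes):

def halts(program_source, input_data):
    # Hypothetical oracle: True iff the program halts on the given input.
    # No total, correct implementation of this function can exist.
    raise NotImplementedError

def diagonal(program_source):
    # Apply the purported oracle to a program run on its own source text.
    if halts(program_source, program_source):
        while True:      # if the oracle says "halts", loop forever
            pass
    else:
        return           # if the oracle says "loops", halt immediately

# Feeding diagonal its own source yields the contradiction:
# diagonal(src) halts exactly when halts(src, src) says it does not.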
The paradoxes of meta-mathematics (described above) led to
the development of a new formalism in symbolic logic that
attempted to avoid them. From it, programming languages were
developed18. Kolmogorov, Chaitin and Solomonoff put forward
the idea that the complexity of a string of data can be defined
by the shortest binary computer program for computing the string.
Thus, the complexity is the minimal description length. Chaitin
refined computational complexity and algorithmic information
theory (Chaitin, 1975). The halting probability (the Chaitin
constant Ω) is a real number that represents the probability that
a randomly chosen computer program, having been presented
with an input string, will halt. The complexity of a binary string
is measured by the size of the smallest program for calculating
it. Chaitin defined randomness (lack of structure) via incom-
pressibility. A string is random when it cannot be compressed:
a random string is its own minimal program. He reinterpreted
the results from the works of Gödel and Turing by demonstrat-
ing that any attempt to show the randomness of a sufficiently
long binary string is inherently doomed to failure. Hence, there
can be no formal proof whether or not a sufficiently long string
is random19. In some areas, mathematical truth is completely
unstructured and incomprehensible. This occurs in elementary
number theory and in Peano arithmetic. Further, axioms cannot
be used to derive results of higher complexity than their own.
To derive conclusions of high complexity, a highly complicated
axiomatic system is required (Chaitin, 1999).
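Although Kolmogorov-Chaitin complexity is uncomputable, its guiding intuition of incompressibility can be illustrated with an ordinary compressor serving as a rough upper bound (an illustration added here, not part of the cited works; zlib merely stands in for the "smallest program"):

import os
import zlib

# Kolmogorov-Chaitin complexity is uncomputable; a general-purpose compressor
# only gives a rough upper bound, used here purely to illustrate the idea that
# a random string is, with overwhelming probability, close to incompressible.
def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

structured = b"01" * 5000           # highly regular: a short description suffices
random_like = os.urandom(10000)     # patternless: resists compression

print(len(structured), compressed_size(structured))    # compresses to a small fraction
print(len(random_like), compressed_size(random_like))  # stays close to 10000 bytes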
Unverifiability—Popper
Karl Popper coined the term “critical rationalism” to describe
his philosophy (Popper, 196320). It indicates his rejection of
classical empiricism, and the classical observational-inductive
method of science that was derived from it. Prior to Popper,
induction had been an accepted research approach. In it, con-
clusions are drawn from specific statements to more general
statements. In the empirical sciences, the erection of hypothe-
sis- and theory-systems by induction from specific observations
was considered appropriate. The technique of complete induc-
tion had been formalized in mathematics by Blaise Pascal. Al-
though induction in the applied sciences is never complete,
Bertrand Russell had acknowledged that extrapolation from
scientific observations to general laws of science (which are
presumed to hold in the future) is impossible unless the induc-
tive principle is assumed. Scientific progress would grind to a
halt if one did not assume the legitimacy of extrapolations from
(reproduced) observations to general principles. Investigation
would be trapped in a never-ending process of reconfirming
experiments of the past. The inductive principle is exemplified
in the notion that the sun will rise tomorrow because it has risen
every day thus far.
Popper built on the recognition by David Hume that induc-
tion has logical shortcomings21. He realized that a verification
of all-statements was neither logically consistent nor practically
feasible. Theories are never empirically verifiable. His account
of the logical asymmetry between verifiability and falsifiability
lies at the heart of his philosophy of science. In Popper’s exam-
ple, no matter how many white swans are observed it does not
allow the conclusion that all swans are white. By contrast, the
observation of a single black swan is sufficient to support the
conclusion that not all swans are white22. Hence, the falsifica-
tion of hypotheses and theories is supported by deductive logic
and can be accomplished with one counter-example (the falsi-
fication) if the hypotheses or theories in question are all-state-
ments. The term “falsifiable” means that if a hypothesis is false
this can be shown by observation or experiment. Logically, no
number of positive outcomes at the level of experimental test-
ing can confirm a scientific theory, but a single counter-exam-
ple is logically decisive: it shows the theory, from which the
implication is derived, to be false. The shortcomings of induc-
tion therefore led Popper to the development of a deductive
method of testing. He held that one should rationally prefer the
least likely (simplest, most easily falsifiable) theory that ex-
plains known facts. It is impossible, Popper argued, to ensure
that a theory is true; it is more important that its falsity can be
detected as easily as possible. We cannot know with certainty
what is always true, only what is not.
The demarcation between scientific and transcendental prob-
lems has been an important question in philosophy. In induc-
tion logic, the criterion for the demarcation of the empirical
sciences from mathematics, logic, and metaphysics is definitive,
because the logical form of its statements is such that their veri-
fication or falsification is finally decidable. This is not the case
in the hypothetical-deductive theory of knowledge. Popper took
falsifiability as his criterion of demarcation between what is,
and is not, genuinely scientific23: a theory should be considered
scientific if—and only if—it is falsifiable (Popper, 1935). Like
the Vienna Circle, Popper investigated the testing of hypotheses
16The capability of almost any computer programming language to express
all possible algorithms is now known as computational universality (Wolf-
ram, 2002).
17Chapter 12: The principle of computational equivalence.
18Chaitin points out that his work addresses the Berry paradox (“the first
ordinary number that cannot be named in a finite number of words”); he
delineates it from Gödel’s focus on the liar paradox (“this statement is false”)
and Turing’s work on the Russell paradox (see Footnote 5) (Chaitin, 1999).
Here, we support the view that the gnoseological relevance of his work, like
Turing’s, is the elucidation of principal limitations to computability.
19“[…] incompleteness is not accidental, but ubiquitous […]: the probability
that a true sentence of length n is provable in the theory tends to zero when n
tends to infinity, while the probability that a sentence of length n is true is
strictly positive.” (Calude, 2005).
20Introduction, Section XV.
21Hume had shown in his 1739 Treatise of Human Nature that reliance on
experience to draw conclusions to unobserved cases would lead to an infi-
nite regress, as discussed by Popper (1982, Neuer Anhang VII).
22“As is well known, no number of observations of white swans entitles us to
the statement that all swans are white.” (Popper, 1935, Kapitel I.1., trans-
lated).
23For Popper, this was the demarcation of the empirical sciences from
mathematics, logic and metaphysics (Popper, 1935, Kapitel I.4.). Here,
mathematics is considered one of the sciences and the relevant demarcation
is the one from metaphysics.
(“synthetic statements”). However, rather than defining a path
to absolute knowledge he identified the principal limitation that
verification of general hypotheses (“all-statements”) is impos-
sible.
Although associated with some great advances, this period
was largely characterized by defining the limitations of the striv-
ing for absolute knowledge that had been initiated from around
1880 through the 1920s. To some degree, it generated a crisis in
the theory of knowledge because, rather than elucidating new
possibilities, its most influential works expounded on limits to
obtainable knowledge (in essence elucidating impossibilities).
Dynamic Knowledge
The discovery of non-linear systems dynamics, mainly in the
1960s, and its ensuing rapid research progress (aided in part by
the increasing availability of computer simulations) has pro-
foundly impacted the natural sciences. The inherent emergent
properties rooted in a high sensitivity to the initial conditions of
such systems also have required reevaluations of existing theo-
ries of knowledge. No longer is certainty attainable within the
limits defined from the 1920s through the 1960s. Knowledge is
fluid—it can be produced and destroyed, and it is always prob-
abilistic.
The dynamic nature of knowledge has been established in
complex systems research of non-periodic flow (by Lorenz) and
emergent processes (by Prigogine and Kauffman), in which
information is generated and lost (Shaw). Complexity research
has also broken the dichotomy between chance and necessity
with the definition of degrees of randomness (Crutchfield). The
investigations are strongly influenced by two recognitions of
the preceding period, uncertainty and computational complex-
ity.
Non-linear systems research often describes events as tra-
jectories in phase space. The uncertainty principle assures
that two trajectories become indistinguishable after they have
approached each other below a minimum distance. Further,
bifurcation points, where a system can evolve toward one
state or another, are at the heart of non-linear systems. The
infinite accuracy of measurement at the bifurcation point
that would be required to predict which state a system will
assume is impossible. Hence, unpredictability and changes
in information content by complex systems are rooted in the
uncertainty principle.
Non-linear systems dynamics draws heavily on information
theory to establish new concepts of chance and necessity. In
the 1940s, Claude Shannon (1948) had developed the mod-
ern concept of information theory. Communication occurs
between a sender and a receiver via a channel. The channel
capacity is a critical determinant, which is calculated from
the noise characteristics of the channel. For all communica-
tion rates below channel capacity, the probability of error
can be made arbitrarily small. However, theoretically opti-
mized communication schemes may be computationally im-
practical. Random processes have an irreducible complexity
below which the signal cannot be compressed. Shannon
named the ultimate data compression the entropy. Entropy
and mutual information are functions of the probability dis-
tributions that underlie the process of communication. The
Kolmogorov-Chaitin complexity (K) is approximately equal
to the Shannon entropy (H) if the sequence of the string
under study is drawn at random from a distribution that has
the entropy H (Kolmogorov, 1968). Specifically, for almost
all infinite sequences produced by a stationary process the
growth rate of the Kolmogorov-Chaitin complexity is the
Shannon entropy rate. Thus, the insights derived from un-
computability contribute to the foundations of non-linear
systems research and its epistemological implications.
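As a minimal illustration added here (not taken from the cited sources), the Shannon entropy of a string's empirical symbol distribution can be computed directly; estimating the entropy rate of a stationary process, whose growth the Kolmogorov-Chaitin complexity tracks, would additionally require statistics over longer blocks:

import math
from collections import Counter

# Shannon entropy, in bits per symbol, of the empirical symbol distribution.
# This single-symbol estimate ignores temporal correlations; the entropy rate
# of a stationary process would be estimated from statistics of longer blocks.
def symbol_entropy(sequence: str) -> float:
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(symbol_entropy("01" * 500))        # 1.0 bit, although the string is perfectly predictable
print(symbol_entropy("0" * 999 + "1"))   # ~0.01 bits: nearly deterministic symbol statistics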
Non-Periodic Flow—Lorenz
A lack of periodicity is very common in natural systems, and
is one of the distinguishing features of turbulent flow. Because
instantaneous turbulent flow patterns are so irregular, attention
to them was often confined to the statistics of turbulence, which,
in contrast to the details of turbulence, often behave in a regular
well-organized manner. A closed hydrodynamic system of fi-
nite mass may ostensibly be treated mathematically as a (usu-
ally very large) finite collection of molecules, in which case the
governing laws are expressible as a finite set of ordinary dif-
ferential equations. These equations are generally highly intrac-
table, and the ensemble of molecules is usually approximated
by a continuous distribution of mass. The governing laws are
then expressed as a set of partial differential equations, con-
taining such quantities as velocity, density, and pressure as de-
pendent variables. It is sometimes possible to obtain particular
solutions of these equations analytically, especially when these
solutions are periodic or invariant with time. Ordinarily, how-
ever, non-periodic solutions cannot readily be determined, ex-
cept by numerical procedures24.
A finite system of ordinary differential equations represent-
ing forced dissipative flow often has the property that all of its
solutions are ultimately confined within the same bounds. A
non-periodic solution with no transient component must be
unstable in the sense that solutions temporarily approximating
it do not continue to do so. A non-periodic solution with a tran-
sient component is sometimes stable, but in this case its stabil-
ity is one of its transient properties, which tends to die out (Lo-
renz, 1963). Finite systems of deterministic ordinary non-linear
differential equations may be designed to represent forced dis-
sipative hydrodynamic flow. Solutions of these equations can
be identified with trajectories in phase space. Systems with
bounded solutions possess bounded numerical solutions25.
Prediction of the sufficiently distant future is impossible by
any method, unless the initial conditions are known exactly (a
feat impossible to accomplish according to Heisenberg’s un-
certainty relation). The foundation of Lorenz’s principal result
is the eventual necessity for any bounded system of finite di-
mensionality to come arbitrarily close to acquiring a state it has
previously assumed. Only if the system is stable will its future
development then remain arbitrarily close to its past history,
and it will be quasi-periodic26. The discovery by Lorenz of
non-linear dynamic flow, the outcome of which sensitively de-
pends on the initial conditions, confined the dictum of predict-
24The lack of closed form solutions for non-linear differential equations has
elevated the status of computer modeling in research from its use as a rather
preliminary analysis that merely guides the path toward formal proof to a
third methodology beside experimentation and logical deduction.
25“For those systems with bounded solutions, (…) non-periodic solutions are
ordinarily unstable [under the influence of] small modifications, so that
slightly differing initial states can evolve into considerably different [out-
come] states” (Lorenz, 1963).
26Unstable systems display the now famous “butterfly effect”: One flap of a
butterfly’s wings may change the future course of the weather in a place far
away.
ability in science27.
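The sensitivity to initial conditions is easy to reproduce numerically (a sketch added here with the classical parameter values sigma = 10, rho = 28, beta = 8/3; the integrator, step size, and initial perturbation are illustrative choices, not taken from Lorenz, 1963):

import numpy as np

# Lorenz's 1963 system with the classical parameters sigma = 10, rho = 28, beta = 8/3.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt=0.01):
    # One fourth-order Runge-Kutta step (the integrator choice is illustrative).
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])      # an imperceptibly different initial state

for step in range(1, 3001):             # thirty time units
    a, b = rk4_step(lorenz, a), rk4_step(lorenz, b)
    if step % 500 == 0:
        # The separation grows roughly exponentially until it saturates at the
        # size of the attractor: the initial difference no longer constrains the future.
        print(step, np.linalg.norm(a - b))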
Emergence—Prigogine/Kauffman
The emergence of ordered structures from non-equilibrium
conditions was described in chemistry by Ilya Prigogine and in
evolution by Stuart Kauffman. Historically, physics has studied
reversible processes. Classical physics, including quantum the-
ory and relativity theory, has provided only limited models of
temporal development. The past and the future are described as
trajectories in phase space, which implies that both are some-
how contained in the present. The Schrödinger equation is
completely deterministic28, and the macroscopic thermodynamic
description typically focuses on mean values for which random
fluctuations became negligible, whereas quantum mechanics
introduced a probabilistic description on the microscopic level.
States in thermodynamic equilibrium (or states that equate to a
minimal entropy production in the linear thermodynamics of
non-equilibrium) are stable states. Yet, irreversible processes
play a fundamental constructive role in the physical world. The
laws of irreversible processes (Prigogine, 1980) embed dynam-
ics in a more comprehensive formalism that includes unsta-
ble states. Non-equilibrium can lead to dissipative structures,
wherein fluctuations introduce a stochastic description into the
macroscopic level. Instabilities far from equilibrium are essential
elements for emerging systems. In the vicinity of their bifurca-
tions the law of large numbers is not valid anymore. While the
molecular interactions in chemical reactions far from equilib-
rium are the same as in equilibrium, they also become depend-
ent on global conditions. The transition from the dynamic,
time-reversible description of mechanics to the description of
emerging processes is accomplished through a particular form
of a non-local transformation, in which the homogeneity of the
space-time structure is destroyed and entropy and time become
operators. This transition involves an internal time that is de-
rived from the indeterminism of the trajectories in unstable
dynamic systems. The transformation leads to a spatio-tem-
porally non-local description.
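A textbook model of such a dissipative structure, associated with Prigogine's Brussels school although not discussed explicitly above, is the Brusselator: beyond the instability threshold B > 1 + A^2 its homogeneous steady state loses stability and sustained chemical oscillations emerge. A minimal numerical sketch with illustrative parameter values:

import numpy as np

# Brusselator rate equations (dimensionless), a standard model of a chemical
# dissipative structure from Prigogine's Brussels school. The parameter choice
# A = 1, B = 3 is illustrative and lies beyond the instability at B = 1 + A^2.
def brusselator(s, A=1.0, B=3.0):
    x, y = s
    return np.array([A + x * x * y - (B + 1.0) * x, B * x - x * x * y])

def rk4_step(f, s, dt=0.01):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.2, 3.1])                # a small displacement from the steady state (A, B/A)
for step in range(1, 6001):
    s = rk4_step(brusselator, s)
    if step % 1000 == 0:
        print(step, s)                  # concentrations settle into sustained oscillation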
The initiating event for every step in evolution is an error in
reproductive invariance29 (a mutation). Such a chance event is the
origin of any innovation and creation in living nature. Once a
mutation has taken place, its penetration of the population is
subjected to the rule of selection. However, simple and com-
plex systems can exhibit powerful self-organization. The effects
of mutation and selection are diminished when operating on
systems that have their own rich and robust self-ordered prop-
erties30. As the complexity of regulatory networks under selec-
tion increases (“complexity catastrophe”), selection is ultimately
limited by:
being too weak in the face of mutations to hold a population
at small volumes of the ensemble, which exhibit rare prop-
erties; hence, typical properties are encountered instead
or if selection is very strong, the population typically be-
comes trapped on suboptimal peaks of an adaptive land-
scape, which do not differ substantially from the average
properties of the ensemble.
Evolution can be viewed as occurring in an imaginary space,
the shape of which is defined by the distribution of properties
across an ensemble (a “fitness landscape”31) (Kauffman, 1993).
Spontaneous order is maintained despite selection, not because
of it. However, selection may be able to change ensembles of
self-organized systems by mitigating the tendency for adaptive
processes to become trapped on continuously lower local op-
tima of fitness as complexity increases. Below a critical com-
plexity of an organism, the selective force is stronger than the
mutational force. Selection can either hold the population at the
global optimum or pull it there from a suboptimal genotype.
Above the critical complexity, the dispersing mutational pres-
sure increases, and the population falls from the global opti-
mum to a suboptimal stationary steady state.
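Kauffman's NK model (developed in Kauffman, 1993, though not spelled out in the text above) makes this concrete: each of N binary sites contributes a fitness value that depends on its own state and on K other sites, and as K grows the landscape becomes rugged enough that an adaptive walk by single mutations stalls on a suboptimal local peak. A minimal sketch with illustrative parameter values:

import random

# A minimal sketch of Kauffman's NK fitness landscape: each of N binary sites
# contributes a fitness value that depends on its own state and on the states
# of K other sites. All parameter values here are illustrative.
random.seed(0)
N, K = 12, 4
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
tables = [dict() for _ in range(N)]     # lazily generated random contributions

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = (genome[i],) + tuple(genome[j] for j in neighbors[i])
        if key not in tables[i]:
            tables[i][key] = random.random()
        total += tables[i][key]
    return total / N

def adaptive_walk(genome):
    # Greedy single-mutation hill climbing; on rugged landscapes (large K)
    # the walk halts on a local optimum well below the global one.
    while True:
        flips = [genome[:i] + (1 - genome[i],) + genome[i + 1:] for i in range(N)]
        best = max(flips, key=fitness)
        if fitness(best) <= fitness(genome):
            return genome
        genome = best

start = tuple(random.randint(0, 1) for _ in range(N))
local_optimum = adaptive_walk(start)
print(fitness(start), fitness(local_optimum))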
Generation and Destruction of Information—Shaw
The energy of physical systems can be described on the
macro-scale, which in classical mechanics is completely intelli-
gible, and the micro-scale of thermal motion, which to classical
mechanics is unintelligible but can be successfully ignored32.
Shaw applied information theory to the measurements of dy-
namical systems. It was his recognition that there are non-con-
servative systems, where there may be an active flow of infor-
mation between the macro- and micro-scales. Simple system
equations displaying turbulent behavior are capable of acting as
an information source. According to Heisenberg’s uncertainty
principle, trajectories in phase space are distinguishable only to
a lower limit of distance between them33. In laminar flow, mo-
tion is governed by boundary and initial conditions, no new
information is generated. In turbulent flow, information is con-
tinuously generated by the flow itself. The transition of a sys-
tem from laminar to turbulent behavior corresponds to a change
of the system from an information sink to an information source34.
The new information of turbulent systems precludes prediction
past a certain time, when new information has accumulated to
27“The result has far-reaching consequences when the system being consid-
ered is an observable nonperiodic system whose future state we may desire
to predict. It implies that two states differing by imperceptible amounts may
eventually evolve into two considerably different states. If, then, there is any
error whatever in observing the present state-and in any real system such
errors seem inevitable-an acceptable prediction of an instantaneous state in
the distant future may well be impossible.” (Lorenz, 1963).
28Compare the section on uncertainty above.
29The term reproductive invariance was originally used by Monod (1985).
30“[…] to combine the themes of self-organization and selection, we must
expand evolutionary theory so that it stands on a broader foundation and
then raise the new edifice. That edifice has at least three tiers:
We must delineate the spontaneous sources of order, the self-organized
properties of simple and complex systems which provide the inherent or-
der evolution has to work with ab initio and always.
We must understand how such self-ordered properties permit, enable, and
limit the efficacy of natural selection. […] In short, we must integrate the
fact that selection is not the sole source of order in organisms.
We must understand which properties in complex living systems confer on
the systems their capacities to adapt. […]” (Kauffman, 1993).
31Local optima on a rugged fitness landscape and attractors in phase space
are alternative metaphors for the same phenomenon. They map a preferred
state to be assumed by a dynamic system. The shape of the attractor or the
ruggedness of the fitness landscape are reflections of the complexity of the
system.
32These micro-scales constitute a lower limit of explanation. In the eras
preceding non-linear systems research, their states were assumed to be
uniform or stochastic (McKelvey, 1998).
33In applying the uncertainty principle that identifies the minimum resolv-
able product of bandwidth and time in the description of the frequency of a
photon, Shaw divides up phase space into minimum resolvable blocks that
identify “states” (Shaw, 1981).
34“The chief qualitative difference between laminar and turbulent flow is the
direction of information flow between the macroscopic and microscopic
length scales. (…) Entropy increases in both laminar and turbulent systems,
that is, energy in both cases moves from macroscopic to microscopic de-
grees of freedom.” (Shaw, 1981).
displace the initial data35. In non-periodic flow, closed form
predictions are impossible because the information they would
represent simply does not exist prior to the operation of the
mechanism. “New information is continuously being injected
into the macroscopic degrees of freedom of the world by every
puff of wind and every swirl of water” (Shaw, 1981). This es-
tablishes, by law of nature, a transience for the intelligibility of
the universe. With the inevitable and always prevalent genera-
tion and destruction of information in non-linear systems, knowl-
edge has become fluid and dynamic.
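Footnote 35 ties the laminar/turbulent distinction to the sign of the Liapunov characteristic exponent. As an illustration added here (using the logistic map as a convenient stand-in for a flow, not an example from Shaw, 1981), the exponent can be estimated directly; a negative value corresponds to an information sink, a positive value to an information source:

import math

# Estimate the Liapunov (Lyapunov) exponent of the logistic map x -> r*x*(1 - x).
# A negative exponent marks an information sink (perturbations decay); a
# positive exponent marks an information source in the sense discussed above.
def lyapunov_exponent(r, x=0.4, transient=1000, steps=100000):
    for _ in range(transient):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log of the local stretching factor
    return total / steps

print(lyapunov_exponent(2.9))   # negative: regular, predictable regime
print(lyapunov_exponent(3.9))   # positive: chaotic regime, new information is generated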
Degrees of Randomness—Crutchfield
One designs clocks to be as regular as physically possible, so
much so that they are the very instruments of determinism. The
coin flip, by contrast, expresses our ideal of total randomness.
Although randomness is as necessary to physics as determinism,
the clock and the coin flip are mathematical ideals36. Many
domains face the confounding problems of detecting random
and patterned components in processes under study. These tasks
translate into measuring their intrinsic computation. Like Shaw,
Crutchfield applied Shannon’s information theory to the analy-
sis of complex systems, viewing every process as a channel that
communicates its past to its future through its present37. Simi-
larly, he viewed model building in terms of a channel through
which experimentalists communicate results to one another.
Crutchfield (2012) compared the deterministic and statistical
descriptions of complexities, which despite their different teleolo-
gies are related and essentially complementary in physical sys-
tems.
One approach that models system behaviors by applying
exact deterministic representations leads to the determinis-
tic complexity that allows us to measure degrees of ran-
domness. Kolmogorov-Chaitin complexity is a measure of
randomness, not a measure of structure.
Ensembles of behaviors can be measured with statistical
complexity that assesses degrees of structural organization.
One solution, familiar in the physical sciences, is to dis-
count for randomness by describing the complexity in en-
sembles of behaviors. The unpredictability of deterministic
chaos forces investigators to use the ensemble approach.
A synthesis of those descriptions is articulated in computa-
tional mechanics, an extension of statistical mechanics that
describes not only a system’s statistical properties but also how
it stores and processes information—how it computes38. At root,
extracting the representation of a process is accomplished by
grouping together histories that make the same predictions; the
groups themselves capture the relevant information for predict-
ing the future. This leads to the definition that the equivalence
classes of the relation are the process’s causal states S (its re-
constructed state space), and the induced state-to-state transi-
tions are the process’s dynamic T (its equations of motion).
Together, the states S and dynamic T give the process’s so-
called ε-machine, which describes the effective states; the sta-
tistical complexity is then the amount of information the proc-
ess stores in its causal states. The ε-machine (states
plus dynamic) forms a semi-group that gives all of a process’s
symmetries, including noisy symmetries (Shalizi, 2001). The
statistical complexity has an essential kind of representational
independence. The causal equivalence relation, in effect, ex-
tracts the representation from a process’s behavior. Causal
equivalence can be applied to any class of system—continuous,
quantum, stochastic or discrete.
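The grouping of histories into causal states can be illustrated on a toy process (a crude finite-history sketch added here, not Crutchfield's full ε-machine reconstruction algorithm): the "Golden Mean" process never emits two consecutive 1s, and its known ε-machine has two causal states, which the grouping below recovers.

import random
from collections import defaultdict

# Toy illustration of causal states: in the "Golden Mean" process, after a 1
# the next symbol must be 0; after a 0 it is 0 or 1 with equal probability.
# Grouping length-two histories by their empirical next-symbol statistics
# recovers the two causal states; the rounding below is a crude stand-in for
# a proper statistical test of equivalence.
random.seed(1)
sequence, last = [], 0
for _ in range(200000):
    nxt = 0 if last == 1 else random.randint(0, 1)
    sequence.append(nxt)
    last = nxt

counts = defaultdict(lambda: [0, 0])            # history -> [count next=0, count next=1]
for i in range(2, len(sequence)):
    history = tuple(sequence[i - 2:i])
    counts[history][sequence[i]] += 1

states = defaultdict(list)                      # rounded P(next=1 | history) -> histories
for history, (n0, n1) in counts.items():
    states[round(n1 / (n0 + n1), 1)].append(history)

for probability, histories in sorted(states.items()):
    print(probability, sorted(histories))       # two groups: pasts ending in 1 vs. ending in 0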
The statistical complexity defined in terms of the ε-machine
solves the main problems of the Kolmogorov-Chaitin complex-
ity by being representation independent, constructive, the com-
plexity of an ensemble, and a measure of structure. In these
ways, the ε-machine gives a baseline against which any meas-
ures of complexity, or modeling in general can be compared. It
is a minimal sufficient statistic that captures a system’s pattern
in the algebraic structure of the ε-machine. The degree of ran-
domness of a process is defined as its ε-machine’s Shannon
entropy rate; its amount of organization is defined by its ε-ma-
chine’s statistical complexity. The ε-ma-
chine approach demonstrates how the framework of determinis-
tic complexity relates to computational mechanics. With it,
Crutchfield was able to break down the dichotomy between ne-
cessity and chance.
Complexity often arises at the order/disorder border. There is
a tendency for natural systems to balance order and chaos, to
move to this complex interface between predictability and un-
certainty. This often appears as a change in a system’s intrinsic
computational capability. Natural systems that evolve by inter-
action with their immediate environment exhibit both structural
order and dynamical chaos. Order is the foundation of commu-
nication between elements at any level of organization. Chaos
is the dynamical mechanism by which nature develops con-
strained and useful randomness. From it follows diversity (Crutch-
field, 2012).
Conclusion
The evolution in the concepts of knowledge over the past
century has important implications. The historical development
outlined here reflects the persistence of key questions and key
techniques over decades, which were applied to enhance certainty
but ended up displaying its limits. In using meta-mathematical
formulations, Gödel found limitations in the axiomatic method.
In an attempt to automate testing the truth of mathematical
statements, Turing and later Kolmogorov and Chaitin discov-
ered uncomputability. Even though these discoveries identified
boundaries to what is knowable, they provided techniques for
analyzing systems that were previously intractable. The Kolmogorov complexity of a sequence produced by a stochastic source is, on average, approximately equal to the Shannon entropy of information theory. Complexity and entropy are two measures that have been widely applied to describe the dynamic nature of knowledge.
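To make the sense in which the two quantities agree explicit, the following is a sketch of the standard relation from algorithmic information theory, stated for an i.i.d. source with alphabet \(\mathcal{X}\), per-symbol entropy \(H(X)\), and a constant \(c\); it is a textbook result, not a formula taken from the works cited here:

\[
H(X) \;\le\; \frac{1}{n}\, E\!\left[ K(X_1 X_2 \cdots X_n \mid n) \right] \;\le\; H(X) + \frac{|\mathcal{X}| \log n}{n} + \frac{c}{n},
\]

so the expected Kolmogorov complexity per symbol converges to the Shannon entropy as the sequence length n grows.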
We live in a culture that treats knowledge as cumulative, as
persistently increasing39. Yet, non-linear systems dynamics dem-
onstrates that the loss of information, and with it the loss of
knowledge, is as inevitable as the emergence of new informa-
tion by non-periodic flow. Intuitively, the claim that knowledge
—once acquired—is not permanent may seem implausible. How-
ever, the dynamic nature of knowledge may be well illustrated
with the example of archeology. This branch of science tries to
recover information that once was obvious but has been lost. The full information content of the past can never be reconstructed
(as delineated by Shaw (1981), the old information has been
replaced). Conversely, some information (for example the DNA
sequence of dinosaurs) was implicit in the system, but can be
explicated as knowledge only with today’s technology. Ergo,
knowledge is in flux, being constantly generated and destroyed.
It could be argued that the concept of knowledge espoused
here is flawed as knowledge does not equate to information, so
the generation or destruction of information has no bearing on
the evolution of knowledge. While we concur in differentiating between information and knowledge, we nevertheless assert that knowledge, in a scientific sense, requires infor-
mation as its basis. Without the empirical component of infor-
mation collected, knowledge turns into a transcendental type of
certainty, which is outside the realm of science. Of note, Shan-
non (1948) used his definition of information as a basis for his
analyses of communication—an essential component in the
generation of knowledge.
The continuity of experience causes us to perceive the uni-
verse as one entity. In contrast, the description of nature typi-
cally categorizes observations and creates opposites that are
seemingly unrelated, thus generating sub-entities of the world
that are mutually disconnected. Among the starkest of these
opposites is the separation of causative events from chance
events. An entirely deterministic world view makes human deci-
sions futile and leads to fatalism, whereas a stochastic view
eliminates the need for decisions due to the random nature of
future events, ultimately leading to nihilism. Currently, the most
prevalent way of dealing with this conflict is the perception that
some events are subject to cause-effect relationships, while
others are stochastic (random) in nature. This dualism inevita-
bly creates two worlds, which are mutually unconnected. The
evolution in the theory of knowledge over the past century has
accomplished (in its most recent, third period) a reconnection of
the two world views, not as opposites that compete for control over nature but as alternative descriptions of one unified
nature. Yet, this progress has forced us to give up on the ideal-
istic goals of certainty and completeness in knowledge.
Scientific observation always originates in a hypothesis and
is deduced therefrom according to set rules of logic and meth-
odology40. The axioms in mathematics are, in fact, hypotheses,
rendering this field of inquiry a hypothetico-deductive science,
like the natural sciences. As hypothesis-free observation does
not exist, research moves from hypotheses (which in philoso-
phical terms are the prior, starting points that can be chosen
quite freely) to deductions and observations that are consistent
with them. It formulates coalescent theories as mutually con-
sistent sets of hypotheses. However, it must also permanently
question and reexamine the original hypotheses and their roots.
Scientific inquiry needs to move from the prior to all possible
directions, to more basic as well as more applied questions. The
starting hypothesis is strengthened or weakened by the prepon-
derance of evidence, not refuted by final proof (as envisioned
by Popper). If hypotheses and theories can neither be proven
nor refuted definitively, and if the best that can be achieved is a preponderance of evidence for or against a hypothesis, what constitutes the ultimate goal of the scientific enterprise?
We can assume that the number of possible hypotheses about
the world is practically limitless. However, the ideal theory of
the world, while always incomplete, will be inherently free of inconsistencies41. We speculate that there is only one such
possibility. Implied in this model is the notion that scientific
progress is accomplished by addressing and eliminating incon-
sistencies.
35“The transition of a system from laminar to turbulent behavior is understandable in terms of the change of [the Liapunov characteristic exponent] from negative to positive, corresponding to the change of the system from an information sink to a source. The new information of turbulent systems precludes predictability past a certain time; when information accumulates to displace the initial data, the system is undetermined.” (Shaw, 1981: Abstract)
36“The extreme difficulties of engineering the perfect clock and implementing a source of randomness as pure as the fair coin testify to the fact that determinism and randomness are two inherent aspects of all physical processes” (Crutchfield, 2012).
37We start from the simple principle that model variables should, as much as possible, render the future and past conditionally independent (Still & Crutchfield, 2007).
38“[…] one sees that many domains face the confounding problems of detecting randomness and pattern. I argued that these tasks translate into measuring intrinsic computation in processes and that the answers give us insights into how nature computes.” (Crutchfield, 2012).
39Note the subtitle “The Growth of Scientific Knowledge” in Popper (1963).
40Remarkably, Heisenberg’s discovery of the uncertainty relation was guided by this recognition, which had earlier been communicated to him by Einstein: “Erst die Theorie entscheidet darüber, was man beobachten kann” (“It is the theory which decides what we can observe”). Heisenberg (1984), Chapter 6.
41This requirement is mandated by the recognition that a single antinomy is sufficient to derive (“prove”) any conclusion whatsoever.
REFERENCES
Barrow, J. D. (1998). Impossibility. The limits of science and the sci-
ence of limits. Oxford: Oxford University Press.
Calude, C. S., & Jürgensen, H. (2005). Is complexity a source of in-
completeness? Advances in Applied Mathematics, 35, 1-15.
http://dx.doi.org/10.1016/j.aam.2004.10.003
Chaitin, G. (1975). A theory of program size formally identical to in-
formation theory. Journal of the ACM, 22, 329-340.
http://dx.doi.org/10.1145/321892.321894
Chaitin, G. (1999). The unknowable. Singapore City: Springer.
Crutchfield, J. P. (2012). Between order and chaos. Nature Physics, 8,
17-24. http://dx.doi.org/10.1038/nphys2190
Einstein, A. (1954). Remarks on Bertrand Russell’s theory of knowl-
edge. In C. Seelig, & S. Bargmann (Eds.), Ideas and opinions (pp.
18-24). New York: Bonanza Books.
Favre, A., Guitton, H., Guitton, J., Lichnerowicz, A., & Wolff, E.
(1988). Chaos and determinism. Turbulence as a paradigm for com-
plex systems converging toward final states. Baltimore: The Johns
Hopkins University Press.
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia
Mathematica und verwandter Systeme I. Monatshefte für Mathematik
und Physik, 38, 173-198. http://dx.doi.org/10.1007/BF01700692
Hahn, H., Neurath, O., & Carnap, R. (1929). Wissenschaftliche Wel-
tauffassung. Der Wiener Kreis. Wien: Artur Wolf Verlag.
Hall, N. (1993). Exploring chaos. A guide to the new science of disor-
der. New York: W. W. Norton & Company.
Heisenberg, W. (1984). Der Teil und das Ganze. Gespräche im
Umkreis der Atomphysik (8th ed.). München: Deutscher Taschen-
buch Verlag.
Kauffman, S. A. (1993). The origins of order. Self-organization and se-
lection in evolution. New York/Oxford: Oxford University Press.
Kolmogorov, A. N. (1968). Logical basis for information theory and
probability theory. IEEE Transactions on Information Theory, 14,
662-664. http://dx.doi.org/10.1109/TIT.1968.1054210
Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the
Atmospheric Sciences, 20, 130-141.
http://dx.doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
McKelvey, B. (1998). Thwarting faddism at the edge of chaos. Brussels:
European Institute for Advanced Studies in Management Workshop
on Complexity and Organization.
Monod, J. (1985). Zufall und Notwendigkeit. Philosophische Fragen
der modernen Biologie (7th ed.). München: Deutscher Taschenbuch
Verlag.
Nagel, E., & Newman, J. R. (1958). Gödel’s proof. New York: New
York University Press.
Popper, K. R. (1935). Logik der Forschung. Zur Erkenntnistheorie der
modernen Naturwissenschaft. Wien: Verlag Julius Springer.
Popper, K. R. (1982). Logik der Forschung. Zur Erkenntnistheorie der
modernen Naturwissenschaft (7th ed.). Tübingen: J. C. B. Mohr.
Popper, K. R. (1963). Conjectures and refutations. The growth of scientific knowledge. New York: Routledge & Kegan Paul.
Popper, K. R. (1972). Objective knowledge. Oxford: Clarendon Press.
Prigogine, I. (1980). From being to becoming. San Francisco: W.H.
Freeman and Company.
Ruelle, D. (1991). Chance and chaos. Princeton: Princeton University
Press.
Shalizi, C. R., & Crutchfield, J. P. (2001). Computational mechanics:
Pattern and prediction, structure and simplicity. Journal of Statistical
Physics, 104, 817-879. http://dx.doi.org/10.1023/A:1010388907793
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423, 623-656.
Shaw, R. (1981). Strange attractors, chaotic behavior, and information
flow. Zeitschrift für Naturforschung, 36a, 80-112.
Still, S., & Crutchfield, J. P. (2007). Structure or noise? Santa Fe Insti-
tute Working Paper, 07-08-020. http://arxiv.org/pdf/0708.0654.pdf
Uebel, T. (2008). Writing a revolution: On the production and early
reception of the Vienna circle’s manifesto. Perspectives on Science,
16, 70-102. http://dx.doi.org/10.1162/posc.2008.16.1.70
Wolfram, S. (2002). A new kind of science (pp. 715-846). Champaign,
IL: Wolfram Media.