Power-Law Distributions in Hard Drive Behavior
parameters, i.e., access time, interleave, seek time, rota-
tional speed and latency, buffer size, data transfer rate,
number of clusters, power consumption, etc.
We will follow an approach in which the complex-systems perspective is the main motif. It is based on the idea of holism, and the research is connected with such terms as: statistical self-similarity, long-range dependence, percolation, non-extensive thermodynamics, thermodynamic non-equilibrium, power laws, phase transitions, small worlds, scale-free networks, motifs, hierarchy, etc. Taking into account the definition of a system presented above, the existence of power laws in the case of hard drive behavior will be related not only to hardware properties but also to the processes that occur inside the drive during processing. To be more precise: hard drive behavior will be described in terms of physical phenomena [8] and their properties, but based on an approach that takes into account, on the one hand, the hard drive's physical properties (more generally: hardware features) and, on the other hand, the tasks processed on the hard drive (more generally: software features). This is done because the complex-systems approach forces us to focus on how each component behaves and acts together with the other components; thus, in the case of computer systems, for example, we cannot separate hardware behavior from software behavior, the network topology from the packet flow, an algorithm from its input data, etc. In general, we cannot separate the processed tasks from the processing environment; cooperating, they give us a picture of the behavior of the whole system.
The paper is divided into four sections. After the Introduction, Section 2 describes the experiment, and Section 3 presents the results of the research. They cover the basic properties of probability distributions with the scale-free property and the possible consequences of interpreting the values of some parameters. The paper is concluded in Section 4. The approach presented here is a continuation of the work in [8] and can be considered further evidence of the complex behavior of computer systems, implying that a change of paradigm is needed for their analysis.
2. Experiment
The research presented in this paper was carried out on a single personal computer running the Windows 7 operating system. The computer configuration was as follows:
Dual-core Intel® Pentium® T2390 processor with f = 1.86 GHz;
L2 cache: 1 MB;
RAM: 2 GB, DDR technology;
Hard drive: Hitachi® Travelstar 5K250 with a capacity of 250 GB, 5400 rpm, SATA interface; average latency: 5.5 ms; average seek time: 11 ms; maximum (buffer-to-host) data transfer rate: 150 MB/s with an 8 MB buffer.
As can be seen, these parameters place this disk among average ones, but we are interested in its dynamical behavior in terms of physics, not in terms of its technological properties. Obviously, these parameters are very important because they establish the drive's limitations, but in our approach hard drive performance will be presented in relation to the processing performed in the computer system.
In order to collect the data necessary for the analysis, we used the Performance Monitor (called perfmon), i.e., a built-in monitor available in the administrative tools of the Windows operating system since the Windows ME edition. This program allows many different parts of the system to be tracked, based on the idea of counters that can be configured for the computer system as a whole, for its particular parts, and even for particular running programs. It is a very interesting tool with which the system administrator (but also the operating system itself) can not only trace the system's current behavior but also record different data sets for further statistical analysis. The sampling interval can be set starting from 1 s; thus, during one hour of system tracing, 3600 samples can be obtained. Some of the counters represent average values, but most of them show raw data. One of the most important properties of this monitor is the fact that its usage has almost no influence on the overall system's performance and behavior, because perfmon shows information that is normally collected for the operation of Windows anyway. In other words, whether perfmon runs or not, such data are always traced, because this ensures the normal, stable operation of the Windows operating system.
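To illustrate how a data set recorded this way could later be screened for the scale-free property, the sketch below reads one counter column from a perfmon-style CSV export and estimates a power-law tail exponent with the maximum-likelihood (Hill) estimator. The file name, column name, and choice of xmin are hypothetical illustrations, not details taken from the experiment; the synthetic check at the end only verifies the estimator itself.

```python
# Sketch: estimating a power-law exponent from a perfmon CSV export.
# A perfmon CSV log has one timestamp column followed by one column per counter.
import csv
import math
import random

def hill_estimator(samples, xmin):
    """Maximum-likelihood (Hill) estimate of the power-law exponent alpha
    for the tail x >= xmin: alpha = 1 + n / sum(ln(x_i / xmin))."""
    tail = [x for x in samples if x >= xmin]
    if not tail:
        raise ValueError("no samples above xmin")
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

def read_counter(path, column):
    """Read one counter column (hypothetical name) from a perfmon-style CSV log."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return [float(row[column]) for row in reader if row[column]]

# Synthetic check: samples drawn from a pure power law with alpha = 2.5
# (inverse-transform sampling) should yield an estimate close to 2.5.
random.seed(42)
alpha, xmin = 2.5, 1.0
data = [xmin * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(100_000)]
print(round(hill_estimator(data, xmin), 2))  # close to 2.5 for a large sample
```

In practice one would call `read_counter("disk_trace.csv", "Disk Transfers/sec")` (both names hypothetical) and pass the result to `hill_estimator` with an xmin chosen from the empirical distribution.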
During the tracing of the computer system a workload was generated, but a short remark about this is needed. There are two approaches to workload generation, each with its own advantages and disadvantages. In the first one, it can be assumed that the workload will be generated by special tests (for example, benchmarks) or other techniques. Such an approach allows for different combinations of workload generation and also guarantees that the experiment can be repeated for different configurations of the system's hardware level. It should be noted, however, that such a high, extreme workload can be considered artificial, because during normal work the user does not run any special benchmarks or programs that constantly generate such a workload. To be more precise: if we observe a typical user working with a computer, we can say that she/he uses a set of applications, for example: office applications, a web browser, an instant messenger, a mail program, a video player, a peer-to-peer system, etc., which generate a "normal" (average) workload. Obviously, during the work the way each program is used is dependent
Copyright © 2011 SciRes. JSEA