Development of Automatically Updated Soundmaps for the Preservation of Natural Environment
Table 1. Sound event.

Event_id | Microphone_id | Time stamp (date, time) | Level 1: Coarse classification | Level 2: Intermediate classification | Level 3: Fine classification
1423     | 4             | 05-15-2010, 19:35       | Biophysical                    | Bird                                 | Lanius senator
cation number”, a unique number that corresponds to a specific sound event appearing in the “list of sound events” of the database. Microphone_id (Table 1) stands for “microphone identification number” and represents the number assigned to each microphone. The “microphone identification number” indicates the exact geographical area where a sound event was recorded; this area is derived from the coordinates at which the microphone has been placed.
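A minimal sketch of such a record and of the coordinate lookup may clarify the structure. The field names follow Table 1; the coordinate values and the MIC_COORDS mapping are illustrative assumptions, not part of the described database.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of one "list of sound events" record (Table 1).
@dataclass
class SoundEvent:
    event_id: int        # unique sound event identification number
    microphone_id: int   # microphone identification number
    timestamp: datetime  # date and time of the recording
    level1: str          # coarse class: anthropogenic / biophysical / geophysical
    level2: str          # intermediate class, e.g. "Bird"
    level3: str          # fine class, e.g. "Lanius senator"

# Illustrative microphone placements: id -> (latitude, longitude).
MIC_COORDS = {4: (40.57, 22.98)}

def event_location(ev: SoundEvent):
    """Derive the geographical spot of an event from its microphone's coordinates."""
    return MIC_COORDS[ev.microphone_id]

ev = SoundEvent(1423, 4, datetime(2010, 5, 15, 19, 35),
                "Biophysical", "Bird", "Lanius senator")
print(event_location(ev))  # (40.57, 22.98)
```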
The sound events are retrieved from the database in
order to form the hierarchical soundmap, which includes
three levels:
• The first, top level presents the coarse classification results, by means of three markers, for the three aforementioned general classes of environmental sounds (anthropogenic, biophysical and geophysical; see Figure 2(a)). Note that the location of each marker on the map corresponds to the spot where the respective recording was taken.
• The second level presents the intermediate classification results within the three main classes of the first level, e.g., fox, frog, bird and wolf within the class of biophysical sounds (see Figure 2(b)-i).
• Finally, the third level presents the fine classification results, e.g., a specific bird (here, a Lanius senator) within the intermediate class of birds (see Figure 2(b)-ii).
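The retrieval step above can be sketched as a grouping of stored events by the label of the requested level. The event tuples and class names below are illustrative assumptions; only the three-level hierarchy comes from the text.

```python
from collections import defaultdict

# Hypothetical event records: (event_id, mic_id, level1, level2, level3).
events = [
    (1423, 4, "Biophysical", "Bird", "Lanius senator"),
    (1424, 4, "Biophysical", "Frog", "Pelophylax ridibundus"),
    (1425, 2, "Anthropogenic", "Vehicle", "Car"),
]

def markers(events, level):
    """Group events into map markers for the requested level (1 = coarse ... 3 = fine)."""
    out = defaultdict(list)
    for event_id, mic_id, *labels in events:
        out[labels[level - 1]].append((event_id, mic_id))
    return dict(out)

print(markers(events, 1))  # coarse markers: events grouped under the general classes
print(markers(events, 3))  # fine markers: one group per species-level label
```

Each marker group carries the microphone identifiers, so the map layer can place every marker at the coordinates of the recording microphone.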
3. Conclusions
Monitoring through the development and periodic update of soundmaps is a tool of practical interest for the environmental surveillance of sensitive areas, e.g., regions of the NATURA 2000 network. A method for the development of AUSs for such areas is proposed and tested on a simulated environmental setup, with encouraging results. Further experimentation and adaptation with real field data are necessary before an efficient implementation becomes available.
4. Acknowledgements
Research co-funded by the EU (European Social Fund)
and national funds, action “Archimedes III—Funding of
research groups in T.E.I.”, under the Operational Pro-
gramme “Education and Lifelong Learning 2007-2013”.