Journal of Biomedical Science and Engineering
Vol.08 No.07(2015), Article ID:58370,12 pages
10.4236/jbise.2015.87043

Developing an Evolutionary Algorithm to Search for an Optimal Multi-Mother Wavelet Packets Combination

Ohad Bar Siman Tov1, J. David Schaffer2, Kenneth J. McLeod1,2

1Department of Electrical and Computer Engineering, State University of New York at Binghamton, Binghamton, NY, USA

2Department of Bioengineering, State University of New York at Binghamton, Binghamton, NY, USA

Email: Ohad@binghamton.edu

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 29 August 2013; accepted 24 July 2015; published 28 July 2015

ABSTRACT

The wavelet transform is a popular analysis tool for non-stationary data, but in many cases, the choice of the mother wavelet and basis set remains uncertain, particularly when dealing with physiological data. Furthermore, the possibility exists for combining information from numerous mother wavelets so as to exploit different features from the data. However, the combinatorics become daunting given the large number of basis sets that can be utilized. Recent work in evolutionary computation has produced a subset selection genetic algorithm specifically aimed at the discovery of small, high-performance subsets from among a large pool of candidates. Our aim was to apply this algorithm to the task of locating subsets of packets from multiple mother wavelet decompositions to estimate cardiac output from chest wall motions while avoiding the computational cost of full signal reconstruction. We present experiments which show how a continuous assessment metric can be extracted from the wavelet coefficients, but the dual-objective nature of the algorithm (high accuracy with small feature sets) imposes a need to restrict the sensitivity of the continuous accuracy metric in order to achieve the small subset size desired. A possibly subtle tradeoff seems to be needed to meet the dual objectives.

Keywords:

Wavelet Transform, Data Mining, Genetic Algorithm

1. Introduction

Wavelet analysis has become one of the most commonly used digital signal processing tools, with applications in data compression, image processing, time series data filtering, material detection, and de-noising [1] [2]. Wavelets are particularly well suited for non-stationary time series data analysis wherein the time localization of the frequency components is important. Over the past decade, the use of wavelet analysis has increased rapidly in the biomedical field, with analysis being applied to remove baseline variation and high frequency components from the electrocardiogram (ECG) and to distinguish specific features within the ECG waveform [3] [4]. Wavelets have also been used in electromyography (EMG) [5] [6], mechanomyography (MMG) [7], electroencephalography (EEG) [8] [9], seismocardiography [10], and other medical applications. Wavelet analysis is widely used to de-noise data and to separate observed components through decomposition, thresholding, and reconstruction. Of course, physiological recordings are not messages per se. In communication systems, the original transmitted message is known and can be compared to the received signal; physiologic recordings can only be interpreted based on a set of assumptions regarding the performance of the physiologic system, rather than by comparison to a known signal. Accordingly, the appropriate processing algorithm must be identified by correlating the output produced by various analyses to some system characteristic of interest. In our application, we are interested in discovering whether a subcomponent of chest wall motion (a seismocardiograph recording) can be used to estimate a specific activity of the cardiac muscle, for example, stroke volume. We wish to avoid the time-consuming operation of waveform reconstruction, since the application calls for rapid response from a resource-limited device.

Wavelet analysis can be viewed as a transformation into the time-frequency domain, and involves a series of convolution operations on the dataset against a particular filter set, called the Mother Wavelet, at various positions and time/magnitude scales. The process separates high frequency components from low frequency components and allows inspection of the data through a small window, in order to detect small features over the full analyzed spectrum [1] [11] [12]. The Mother Wavelet function is often selected based on the shape and characteristics of the feature one is trying to extract. Some functions are better at capturing amplitude and phase changes; others are better at synthesizing data and quantitative information. Domingues et al. [13] and Chourasia et al. [14] show examples where selection of a particular mother wavelet provides better feature extraction than others. Rather than accepting such a trade-off by selecting a single basis set, it should be possible to combine information from multiple mother wavelets. If one has inadequate a priori understanding of the characteristics which need to be extracted for a particular application, there may be advantages in performing multiple full tree decompositions using multiple mother wavelets, and then recombining specific packets to create a hybrid. While encouraging in principle, this approach soon faces the curse of dimensionality: the number of combinations increases factorially. Genetic Algorithms (GAs) have some ability to deal with combinatorial exploration, so we set out to explore this approach.

Genetic Algorithms (GAs) and wavelets have been combined recently in image processing for fault detection [15] [16], voice recognition [17], and other applications [18] [19], but these investigations used a binary encoding for packet selection. Our previous investigations [20] have convinced us that a better approach is to incorporate an index representation (genes are the indexes of the features to select from a possibly large pool of features) with a special subset selection crossover operator. This approach has been used in medical imaging, and also in genomic and proteomic data mining [21]-[23], but as far as we know it has not been applied in time series data processing. In this case we used multiple filter banks from multiple mother wavelets. Each mother wavelet was used to decompose the data, providing a bank of filters, also known as packets, and then a GA was used to evaluate the subset of the filters specified in each chromosome (Figure 1).

An example of where this approach could be utilized is Cardiac Output (CO) monitoring. While various invasive methods have been developed to measure CO directly, all present significant complications, such as bloodstream infection, need for medications, decreased hemodynamics, and high cost [24] [25]. CO is defined as the product of Stroke Volume (SV) and Heart Rate (HR), and while HR is a relatively straightforward parameter to assess, SV is much more difficult to assess accurately, and so we have focused on obtaining an accurate non-invasive estimate of SV. We want SV to be measurable continuously for long durations and the technology to be portable, so that it can be used while exercising, sleeping, or engaging in various activities of daily living.

We seek to estimate SV from a seismocardiogram recording, which is obtained by recording chest wall acceleration at the xiphoid process [26] [27]. Our approach involves performing multi-wavelet decompositions on the acceleration data to generate a large pool of features from which the GA is used to select the best packet combination for predicting SV. The “ground truth” SV is obtained using an electrical impedance based Cardiac Output Monitoring device (NICOM, Cheetah Medical Inc.). The NICOM has achieved some acceptance [28]-[30] in the health care world.

Figure 1. The general approach of using multiple filter banks evaluated by a GA. The input signal is decomposed by multiple mother wavelets producing multiple filter banks, shown in different colors. A chromosome’s genes specify a subset from those filter banks. Each subset is combined to give an SV estimate and compared against a “gold standard”.

2. Methods

Eshelman’s CHC GA [31] search engine, combined with the MMX crossover operator [32], identifies the best subset of genes (i.e., packets) from multiple filter banks. Since the goal was to minimize the number of genes, both to avoid overfitting and to reduce the computational cost of SV estimation, a Sub-Set-Size (SSS) variable was defined [20] and added to the chromosome. Figure 2 shows the general CHC pseudo code. The initial population consists of random chromosomes, with each chromosome consisting of a variable number of genes, which are evaluated using a fitness function. CHC’s selection process, called cross-generational rank selection, differs from many conventional GAs. Each parent chromosome has exactly one mating opportunity each generation, and the resulting offspring replace inferior parents. Mates are randomly selected, but matings are limited by an incest prevention operator applied before the offspring reproduction crossover operator. There is no mutation performed in the “inner loop.”

When it becomes clear that further crossovers are unlikely to advance the search, a soft restart is performed, using mutation to introduce substantial new diversity, but also retaining the best individual chromosome in the population.
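As a rough illustration of this control flow, the following Python sketch (our own simplification, not the authors' implementation) shows cross-generational rank selection, the decrementing incest threshold, and the soft restart that mutates copies of the best chromosome; the helpers random_chromosome, crossover, mutate, and evaluate are assumed to be supplied by the caller.

```python
import random

def chc_search(evaluate, random_chromosome, crossover, mutate,
               pop_size=100, max_restarts=10, initial_threshold=16):
    """Simplified sketch of the CHC outer loop (not the authors' code).

    evaluate() returns a comparable fitness (higher is better); crossover()
    returns two offspring, or None when incest prevention blocks the mating.
    """
    population = [random_chromosome() for _ in range(pop_size)]
    scored = sorted(((evaluate(c), c) for c in population), reverse=True)
    threshold = initial_threshold          # e.g. half the maximum subset size
    restarts = 0

    while restarts < max_restarts:
        parents = [c for _, c in scored]
        random.shuffle(parents)            # random mating, one opportunity per parent
        offspring = []
        for a, b in zip(parents[0::2], parents[1::2]):
            kids = crossover(a, b, threshold)
            if kids:
                offspring.extend(kids)
        scored_kids = [(evaluate(c), c) for c in offspring]
        merged = sorted(scored + scored_kids, reverse=True)[:pop_size]
        if merged == scored:               # no offspring survived this generation
            threshold -= 1                 # relax incest prevention
        if threshold <= 0:                 # search has stalled: soft restart
            best = merged[0][1]
            restart_pop = [best] + [mutate(best) for _ in range(pop_size - 1)]
            merged = sorted(((evaluate(c), c) for c in restart_pop), reverse=True)
            threshold = initial_threshold
            restarts += 1
        scored = merged
    return scored[0]                       # (fitness, chromosome) of the best individual
```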

2.1. Initial Population and Chromosome Representation

The initial GA population is generated randomly using a uniform distribution. In CHC, two initial populations are produced, the chromosomes are evaluated, and the more fit chromosomes from both populations are selected to become the next population. For all subsequent generations, the pairs of parents (randomly mated) produce two offspring, and the selection operator produces the next parent generation by taking the best from the combined parents and offspring using simple deterministic ranking.

Understanding the chromosome structure provides an understanding of the connection between the feature genes and the Sub-Set-Size (SSS) gene. A chromosome is defined as a set of genes, and in our approach, the first gene represents the SSS, that is, the number of genes that are expressed when a chromosome is evaluated (Figure 3). The SSS gene takes on values between one and the maximum number of genes we allow; it tells the evaluation routine how many of the subsequent genes are to be used in computing the fitness. The remaining genes represent inheritance from a previous generation and may be passed on to future generations, but they do not contribute to the fitness of the chromosome. It is possible that the offspring will express some of the parental “unexpressed” genes because their locations and the SSS will change. This chromosome format was designed by Schaffer et al. [20] and is used by the MMX_SSS crossover operator.
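For concreteness, here is a minimal Python sketch of this representation (the helper names and the uniform sampling are our own assumptions, not taken from the paper); the pool size and maximum SSS match the values used later in the experiments.

```python
import random

N_FEATURES = 96     # size of the packet pool used in the experiments below
MAX_SSS = 32        # maximum number of expressed genes

def random_chromosome():
    """A chromosome is a flat list: the SSS gene followed by MAX_SSS feature indexes."""
    sss = random.randint(1, MAX_SSS)
    genes = random.sample(range(N_FEATURES), MAX_SSS)   # distinct packet indexes
    return [sss] + genes

def expressed_genes(chromosome):
    """Only the leading SSS feature indexes contribute to the fitness."""
    sss = chromosome[0]
    return chromosome[1:1 + sss]
```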

The expressed genes in a chromosome represent the magnitudes of a subset of wavelet packets. The mathematics of the wavelet transform may be found elsewhere [1] [11] [12]; here we use discrete wavelet transforms. In wavelet transform analysis, the focus is often the low frequency components. The time sequence is separated into two components: low frequency components, called approximations, and high frequency components, called details. Subsequent levels of decomposition are performed on the approximation coefficients, again separating the low frequency components into approximations and details. This process is repeated, with entropy, energy, and/or a cost function being computed after each level of decomposition as a means of optimizing the decomposition process. In our application, the acceleration data may include numerous high and low frequencies not associated with cardiac activity. High energy at low frequencies is likely to be associated with breathing and whole body motion, while high frequency components may be associated with vocalization. Since our goal was to identify those components providing the best correlation with SV, the full signal frequency spectrum was investigated regardless of its computational cost, energy, or entropy.

Figure 2. A general CHC flow chart, where survival of the fittest across generations is implemented.

Figure 3. Chromosome structure used by MMX_SSS, where the SSS gene dictates the number of expressed genes within the chromosome and N is one plus the maximum allowed SSS.

We performed full tree decompositions, that is, decomposition was performed on both the approximation and detail coefficients of each branch using one Mother wavelet (Figure 4). This process was repeated for each of the mother wavelets utilized in the analysis. The first decomposition level was performed on the time sequence, producing the approximation coefficients and the detail coefficients. The second decomposition level was performed on both the approximation and the detail coefficients, producing the first Approximation Approximation (AA), the first Approximation Details (AD), the first Details Approximation (DA), and the first Details Details (DD). Another decomposition level can be performed on the AA, AD, DA, and DD, and so on. The last decomposition level consists of a set of filters which we call packets and serves as a filter bank. Full tree decomposition is applied with multiple mother wavelets, creating multiple filter banks that expand the number of features, allowing us to choose combinations of features that correlate best with SV. It may be possible to achieve better correlation with SV by combining packets from different mother wavelets. An ECG signal was used to capture the ventricular contraction time (QRS complex), which served to identify the time point to evaluate in the decomposed acceleration signal. We performed four decomposition levels with six different mother wavelets, providing ninety-six different features associated with ventricular contraction acceleration energy.

Figure 4. Example of a two level wavelet tree decomposition, where the second decomposition level consists of four packets, creating a filter bank of four different filters, used as CHC genes.
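A minimal Python sketch of this feature-extraction step, using the PyWavelets package; the specific wavelet orders (e.g. db4, sym4) are our assumptions since the paper names only the families, and for simplicity the sketch sums packet energy over the whole 30-second window rather than only around the ECG-identified ventricular contraction times.

```python
import numpy as np
import pywt

# Wavelet orders are illustrative; the paper specifies only the six families.
MOTHERS = ['db4', 'sym4', 'dmey', 'coif4', 'bior3.5', 'rbio3.5']
LEVEL = 4                                   # four decomposition levels -> 16 terminal packets

def packet_energies(window):
    """Full-tree (wavelet packet) decomposition of one 30-second acceleration window
    with each mother wavelet, returning 6 x 16 = 96 packet energies."""
    features = []
    for name in MOTHERS:
        wp = pywt.WaveletPacket(data=window, wavelet=name,
                                mode='symmetric', maxlevel=LEVEL)
        for node in wp.get_level(LEVEL, order='freq'):      # the 16 level-4 packets
            coeffs = np.asarray(node.data)
            features.append(float(np.sum(coeffs ** 2)))     # packet energy
    return np.array(features)
```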

2.2. Chromosome Evaluation Function

The goal of utilizing the subset selection GA was to identify the minimal subset of features capable of accurately estimating the NICOM-reported SVs. The NICOM provides thirty-second averages of SV, and so we performed wavelet decomposition on each thirty seconds of recorded acceleration data. Eighty-five thirty-second averaged measurements were taken sequentially using the NICOM, the ECG, and chest accelerations, from a single subject during both resting and exercising. There were five exercise periods of one hundred and fifty seconds each, all at the same intensity, and five resting periods of two hundred and seventy seconds each. We started to collect data while the subject was at rest, in an upright position, for four hundred and fifty seconds. Multivariate regression was used to correlate the packet energies of the expressed chromosome genes to the averaged NICOM SV measurements. The R² value of the regression line was used as the chromosome fitness value. The higher the R² value, the better the gene set predicts the NICOM SV.
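A minimal sketch of such an evaluation function, assuming an ordinary least-squares fit with an intercept and reusing the expressed_genes helper from the sketch in Section 2.1 (the names X and sv for the feature matrix and the NICOM targets are ours):

```python
import numpy as np

def fitness_r2(chromosome, X, sv):
    """R² of a multivariate linear regression of the expressed packet energies on SV.

    X  : (n_windows, 96) matrix of packet energies, one row per 30-second window.
    sv : (n_windows,) NICOM stroke-volume averages for the same windows.
    """
    idx = expressed_genes(chromosome)                    # packet indexes selected by the GA
    A = np.column_stack([X[:, idx], np.ones(len(sv))])   # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, sv, rcond=None)
    residuals = sv - A @ coef
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((sv - sv.mean()) ** 2))
    return 1.0 - ss_res / ss_tot
```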

2.3. Hierarchical Selection Process

In the CHC GA, the more fit chromosomes remain in the population until they are replaced by even more fit offspring. The fitness function returns a two-vector, where one element is the R² value and the other is the SSS. The selection process compares two chromosomes, a parent A and an offspring B: if R²(A) > R²(B), then A is more fit (and vice versa). However, if R²(A) = R²(B), then the chromosome with the smaller SSS is more fit. If the SSSs are also equal, the parent is not replaced.
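This hierarchical (lexicographic) comparison can be written directly; a small sketch under our naming assumptions, where each fitness is an (R², SSS) pair and returning False keeps the incumbent parent:

```python
def is_fitter(candidate, incumbent):
    """Hierarchical selection: higher R² wins; on an R² tie, the smaller SSS wins;
    if both tie, the incumbent (parent) is kept."""
    r2_c, sss_c = candidate
    r2_i, sss_i = incumbent
    if r2_c != r2_i:
        return r2_c > r2_i
    return sss_c < sss_i
```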

2.4. Crossover

The crossover operator is responsible for offspring reproduction. It consists of three operators: Incest Prevention, which decides whether the two parents may mate; Index Gene Crossover, which is responsible for passing both parents’ genes to the offspring; and SSS Recombination, which sets the SSS gene of each offspring based on both parents’ SSS genes.

2.4.1. Incest Prevention

The crossover operator is applied to each random pair of parents. The first step is to check the pair for incest prevention. Parents who are too closely related are prevented from mating. The distance between two chromosomes is simply the number of unique genes in the leading portions of the chromosomes, out to the furthest gene an offspring might inherit (the larger of the SSS genes of the two chromosomes). The initial value for the incest threshold is half of the maximum SSS, but it is decremented whenever a generation occurs in which no offspring survive. When the incest threshold drops to zero, any chromosome may mate with any other, including a clone of itself. The incest threshold dropping to zero is one of the criteria used by CHC for halt and restart decisions. This incest prevention algorithm has been shown to effectively defeat genetic drift [33]. It does this by promoting exploration, allowing mating only among the more divergent chromosomes as long as this process is successful (offspring survive). Being self-adjusting, it tunes itself to problems of differing difficulty: when more fit offspring are being produced, the threshold remains fixed; it drops only when progress is not occurring.
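A small sketch of this check under one plausible reading of the distance definition (helper names are ours); since both leading portions contain the same number of distinct indexes, counting the genes of one parent that are absent from the other gives a symmetric distance.

```python
def mating_distance(c1, c2):
    """Unique genes in the leading portions, out to the larger of the two SSS genes."""
    k = max(c1[0], c2[0])                     # furthest position an offspring might inherit
    a, b = set(c1[1:1 + k]), set(c2[1:1 + k])
    return len(a - b)                         # equals len(b - a) when both portions hold k distinct genes

def may_mate(c1, c2, threshold):
    """Incest prevention: block the mating when the parents are too similar."""
    return mating_distance(c1, c2) > threshold
```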

2.4.2. Index Gene Crossover

GA research has shown that “respect” is an important property for a crossover operator [34] [35] . That is, if the parents share common genes, it is important that the offspring should inherit them. The MMX_SSS operator achieves this by first copying the common genes from the parents to the offspring. However, given that there is selection pressure for smaller SSS gene values, this copy operation moves each gene one position forward, to the left, in the offspring (Figure 5). Thus, if a gene consistently contributes to fitness, it will slowly migrate towards the front of the chromosome, from grandparent, to parent, to child. If a common gene is in the first position adjacent to the SSS gene, it stays in the first position unless there is a common gene immediately following, in which case they switch places. The unique genes from the two parents are randomly inserted into unused chromosome slots in the offspring. These operations allow genes unexpressed in the parents to become expressed in the offspring.
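A simplified sketch of the index-gene step (our own approximation, not the authors' implementation): it copies the common genes to the front of both offspring, which only loosely mimics the one-position-left shift described above, and deals the parents' unique genes at random into the remaining slots, so genes unexpressed in a parent can become expressed in an offspring.

```python
import random

def index_gene_crossover(parent1, parent2):
    """Return the feature-gene lists of two offspring (SSS genes handled separately)."""
    g1, g2 = parent1[1:], parent2[1:]            # feature genes only
    shared = set(g1) & set(g2)
    common = [g for g in g1 if g in shared]      # common genes, kept near the front
    grab_bag = [g for g in g1 + g2 if g not in shared]
    random.shuffle(grab_bag)                     # unique genes are dealt at random
    half = len(grab_bag) // 2
    return common + grab_bag[:half], common + grab_bag[half:]
```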

2.4.3. SSS Recombination

The last step in crossover is to set the values for the SSS genes in the offspring. In our representation, the SSS gene is the left-most gene in the chromosome. This operation uses the “blend crossover” or BLX [20] [36] . The SSS gene for each offspring is drawn uniformly randomly from an interval defined by the SSS genes in the parents and their fitness (Figure 6).

The interval is first set to that bounded by the parental values, and then extended by fifty percent in the direction of the more fit parent. In the example illustrated in Figure 6, the parent with the smaller SSS gene value, being the more fit, biases evolution towards smaller SSSs. The opposite circumstance may also occur. In fact, this condition (the more fit parent being the one with the larger SSS) is what determines the limit for the computation of unique genes for incest prevention.
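A minimal sketch of this BLX-style step under our assumptions (rounding to an integer and clipping to the legal range are not spelled out in the paper):

```python
import random

def blend_sss(sss1, sss2, parent1_is_fitter, max_sss=32):
    """Draw an offspring SSS uniformly from the parental interval extended by
    fifty percent of its width toward the more fit parent."""
    lo, hi = min(sss1, sss2), max(sss1, sss2)
    span = hi - lo
    fitter = sss1 if parent1_is_fitter else sss2
    if fitter == hi:
        hi += 0.5 * span            # extend toward the fitter, larger-SSS parent
    else:
        lo -= 0.5 * span            # extend toward the fitter, smaller-SSS parent
    value = random.uniform(lo, hi)
    return int(round(min(max(value, 1), max_sss)))   # clip to the legal range
```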

3. Experiments and Results

To evaluate this approach, we performed a series of experiments to test each aspect of the algorithm; these experiments are described in sequential order. All experiments used seismocardiogram data from a single subject obtained at rest and while undergoing mild exercise (light bike pedaling in an upright position with back support).

Figure 5. MMX_SSS crossover operator. The common genes from the two parents are copied one space to the left in the offspring, and the other genes are randomly inserted into the offspring. In this example, the first parent’s common gene 51 switches places first with gene 12 and then with gene 87 in the next generation (offspring one), because all three are common to both parents. Gene 69 from the second parent stays in the first place since gene 41 is not common (offspring two). The rest of the genes, the “unique” genes, are copied to a grab bag, the table on the right. The two offspring randomly pick genes from this grab bag to fill the remaining positions. In this case, the first offspring selects genes 41, 50, 60, and 23, which have a gray background in the table and are underlined within the first chromosome. The second offspring picks the genes with the white background, which are underlined in the second chromosome. Blend crossover sets the SSS gene.

Figure 6. Offspring SSS interval, where parent C1 is more fit than parent C2. N-1 is the maximum allowed subset size.

Four levels of wavelet decomposition were performed on successive thirty-second time intervals. Six mother wavelets were utilized: Daubechies, Symlets, discrete Meyer, Coiflet, Biorthogonal, and reverse Biorthogonal. A “ground truth” SV value was obtained for each thirty-second interval from the NICOM. This produced a data set with 96 features (6 × 16), and a “true” SV for each of the 85 intervals that were measured. We chose to set the maximum value of SSS to 32, assuming the GA could obtain results with a subset much smaller than this. Thus, the chromosome contained 33 genes, one for SSS and 32 packet indexes. As the fitness to be maximized, we used the R² from a linear regression of the packet energies onto SV. The population size was one hundred, the number of soft restarts was set to ten, and the maximum number of zero accepts (the restart condition) was set to three.
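To see why an exhaustive search over this pool is out of the question, a quick back-of-the-envelope count of the candidate subsets (sizes one through thirty-two) drawn from the 96 packets:

```python
import math

# Candidate subsets of 1 to 32 packets from the 96-feature pool.
n_subsets = sum(math.comb(96, k) for k in range(1, 33))
print(f"{n_subsets:.2e}")   # well over 10**25 subsets, far beyond exhaustive evaluation
```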

3.1. Experiment One―Simple Search for Maximum R²

The first experiment was directed toward achieving a maximum R² value, but showed little evidence of convergence. Figure 7 presents several plots that characterize an experiment. All features appear to have been sampled throughout the run, but evolution was unable to eliminate many of them, so a great many features remained in the population throughout the run (upper panel). In the middle panel, we see that within a few generations the population SSS gene has converged to 32 (SSS max), indicating that no smaller value was competitive. In the lower panel, we also see the population rapidly converging on an R² value at or near 0.988. Thus, the GA was unable to distinguish any features as better than any others, and so used the maximum number of features it was permitted (32). The GA discovered many combinations of features that were able to predict SV nearly perfectly. In the example experiment shown in Figure 7, the soft restarts are clearly seen as the introduction of genetic diversity (upper two panels) and a drop in the average and worst population fitness (lower panel). There were ten soft restarts, as dictated by the chosen control parameter.

3.2. Experiment Two―Finding a “Seeded” Solution

Failure of convergence in experiment one caused us to verify the algorithm. We elected to embed a perfect solution in the data, just to test the algorithm’s ability to discover it. We selected a set of five features and “doctored” their values so that together they had perfect correlation with SV. These features were given indexes 4, 31, 67, 80, and 92 (i.e., widely distributed among the pool of features). The “doctored” features emerged as the only genes left in the population after about one hundred generations (Figure 8). The SSS value (middle panel) first rises towards SSS-max as the combinations are sorted out, and then falls to five as selection pressure eliminates chromosomes with more features than the five needed to achieve perfect performance. Figure 9 shows the number of times each feature was sampled over the entire run. The five doctored features were clearly preferred by evolution, but even the non-doctored features were each sampled several hundred times while the GA sorted through the combinations to locate the good one. Thus, we observed that the algorithm can work as expected when there is one perfect solution among a sea of poor ones.

Figure 7. Characterization of experiment one. The X axis represents evolution time, either individual chromosome evaluations (upper panel) or generations (middle and lower panels). In the upper panel, the Y axis is the individual features, and there is a point for each index that was present in the population. The middle panel shows the SSS gene of all chromosomes within the population at each generation. The bottom panel shows the fitness of the best, worst, and average chromosomes within the population at each generation.

Figure 8. Results from the second experiment, where the perfect (seeded) solution was found. The GA successfully detects the five features. The upper panel shows that, as the number of generations increases, the seeded features emerge from the pack. As the middle panel shows, chromosomes with the same fitness value but a smaller SSS gene survive as the generations pass. As the lower panel shows, a good solution is found at the initialization stage.

Figure 9. The “seeded” features are sampled many more times than other features. Vertical lines separate the different mother wavelets.

3.3. Experiment Three―Seeded Solution, All Data Badly Noise Perturbed

We then challenged the algorithm by perturbing the data with Gaussian noise, where each feature is the original value plus twenty percent Gaussian noise. Again we saw the characteristic pattern of convergence failure (Figure 10). Without an easy-to-find superior set of features, the algorithm could only promote the largest possible subset (SSS max) of just about any of the noisy features, each feature adding a tiny increment to the R² value. We hypothesized that the problem might be the sensitivity of the original algorithm’s hierarchical selection scheme to any difference in the first dimension of fitness (R²), no matter how small. Selection for small subset size was never triggered because ties on R² virtually never occurred. This feature of our problem makes it different from previous applications of this algorithm, which were classification tasks, where the fitness was usually the number of classification errors or some similar metric. These errors, being modest discrete integers, often resulted in ties.

3.4. Experiment Four―Seeded Solution, All Data Perturbed with Noise, Reduced R² Sensitivity

To test the influence of R² on convergence, we reduced the number of significant digits in the value of R² reported by the regression to the GA. By setting this to two significant figures, we essentially declared that chromosomes that differ in R² by less than 0.01 should be considered equivalent, thereby allowing for ties and enabling the second level of the hierarchical fitness selection to kick in. One may also think of this as an admission that an R² estimated from a sample of cases must of necessity contain a certain amount of noise (sampling noise rather than measurement noise); allowing the GA to over-exploit that noise provides no benefit. This strategy resulted in a return of effective performance even though the problem is now more difficult because of the noise perturbation (Figure 11). Correspondingly, it now takes longer to locate the good feature set (Figure 12). Perturbed features 67 and 80 correlate better with SV and so are located earlier in the course of evolution. The features with weaker connections, 4, 31, and 92, were not included in the final result by the GA. Feature 31 was sampled more times since it still has a good connection to the residual of SV once features 67 and 80 are included in the regression. However, other features, 21 and 26 (plus their noise), provided better results and were chosen by the GA. The end result comprised four genes, 21, 26, 67, and 80, with a final R² of about 0.98.

Figure 10. A seeded solution is embedded in the dataset, and all data are perturbed with Gaussian noise. Similar to experiment one, the GA fails to converge.

Figure 11. Reducing the precision of R² results in successful convergence. A smaller SSS is achieved since weak features are eliminated.

Figure 12. The “seeded” features which are strongly connected are again preferred, but (compare to Figure 9) weak connections are eliminated and new connections are observed.

3.5. Experiment Five―Original Data, with Reduced Precision on R²

Having an indication that over-precision was precluding convergence in the presence of noise, we reran the original dataset with R² reduced to two significant digits. We observed the patterns that indicate successful learning, and this time without the presence of doctored data. Now the SSS evolves, first to 22 packets (in the first convergence and the next eight soft restarts) and finally to 21 and 22 in the last two soft restarts (Figure 13, middle panel). The R² reached about 0.97 (Figure 13, lower panel), and the best packets can be seen emerging from the chaos (Figure 13, upper panel).
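The precision reduction used in experiments four and five can be as simple as rounding R² to two significant figures before it enters the hierarchical comparison; a minimal sketch (our own helper, not from the paper):

```python
import math

def quantized_r2(r2, sig_figs=2):
    """Round R² to a fixed number of significant figures so chromosomes that differ
    only by sampling noise tie on the first criterion, letting SSS break the tie."""
    if r2 <= 0.0:
        return 0.0
    digits = sig_figs - 1 - int(math.floor(math.log10(r2)))
    return round(r2, digits)
```

The GA would then receive (quantized_r2(r2), sss) as the two-vector fitness.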

Figure 13. Convergence of the original dataset with reduced precision on R². The SSS converged to twenty-one (middle panel) and the best chromosome maintained a good correlation (bottom panel).

4. Discussion and Conclusions

The CHC genetic algorithm with the MMX_SSS crossover operator has previously been applied to feature selection in bioinformatics classification tasks. We provide evidence that this algorithm may also be applicable to feature subset selection tasks in time series data processing, but the use of a high-precision first fitness metric, such as the regression R², seems to require a judicious reduction in the number of significant digits provided to the GA in order to induce ties so that the second metric (SSS) may become active. In classification tasks, ties are common since counts of classification errors have a limited dynamic range. This work seems to show that a tradeoff may be needed between sensitivity to small improvements in accuracy and the desire for small subsets.

We are optimistic that this algorithm can be applied to selecting high-performance, small sets of signal features that can be combined to yield accurate metrics of some signal content, providing the data processing community with a powerful new tool. Finding specific mother wavelet packets that can be combined at the energy level without full waveform reconstruction can enable computationally inexpensive ways to extract information from time series data.

Cite this paper

Ohad Bar Siman Tov, J. David Schaffer, Kenneth J. McLeod (2015) Developing an Evolutionary Algorithm to Search for an Optimal Multi-Mother Wavelet Packets Combination. Journal of Biomedical Science and Engineering, 08, 458-470. doi: 10.4236/jbise.2015.87043

References

1. Graps, A. (1995) An Introduction to Wavelets. IEEE Computational Science and Engineering, 2, 18 p.

2. Ruqiang, Y. and Gao, R.X. (2009) Tutorial 21 Wavelet Transform: A Mathematical Tool for Non-Stationary Signal Processing in Measurement Science Part 2 in a Series of Tutorials in Instrumentation and Measurement. IEEE Instrumentation & Measurement Magazine, 12, 35-44.

3. Kadambe, S., Murray, R. and Boudreaux-Bartels, G.F. (1999) Wavelet Transform-Based QRS Complex Detector. IEEE Transactions on Biomedical Engineering, 46, 838-848.
   http://dx.doi.org/10.1109/10.771194

4. Chen, W., Mo, Z. and Guo, W. (2012) Detection of QRS Complexes Using Wavelet Transforms and Golden Section Search Algorithm. International Journal of Engineering and Advanced Technology (IJEAT), 1, 2249-8958.

5. Mithun, P., Pandey, P.C., Sebastian, T., Mishra, P. and Pandey, V.K. (2011) A Wavelet Based Technique for Suppression of EMG Noise and Motion Artifact in Ambulatory ECG. 33rd Annual International Conference of the IEEE EMBS, 2011, 7087-7090.

6. Frère, J., Gopfert, B., Slawinski, J. and Tourny-Chollet, C. (2012) Shoulder Muscles Recruitment during a Power Backward Giant Swing on High Bar: A Wavelet-EMG-Analysis. Human Movement Science, 31, 472-485.
   http://dx.doi.org/10.1016/j.humov.2012.02.002

7. Beck, T.W., Housh, T.J., Fry, A.C., Cramer, J.T., Weir, J.P., Schilling, B.K., Falvo, M.J. and Moore, C.A. (2009) A Wavelet-Based Analysis of Surface Mechanomyographic Signals from the Quadriceps Femoris. Muscle Nerve, 39, 355-363.
   http://dx.doi.org/10.1002/mus.21208

8. Kannan, S., Dauwels, J. and Ramasubba, R. (2012) Multichannel EEG Compression: Wavelet-Based Image and Volumetric Coding Approach. IEEE Journal of Biomedical and Health Informatics, 17, 113-120.

9. Nguyen-Ky, T., Wen, P., Li, Y. and Malan, M. (2012) Measuring the Hypnotic Depth of Anaesthesia Based on the EEG Signal Using Combined Wavelet Transform, Eigenvector and Normalisation Techniques. Computers in Biology and Medicine, 42, 680-691.
   http://dx.doi.org/10.1016/j.compbiomed.2012.03.004

10. Sandham, W., Hamilton, D., Fisher, A., Wei, X. and Conway, M. (1998) Multiresolution Wavelet Decomposition of the Seismocardiogram. IEEE Transactions on Signal Processing, 46, 2541-2543.
    http://dx.doi.org/10.1109/78.709542

11. Parashar, K. (2013) Discrete Wavelet Transform.
    http://www.thepolygoners.com/tutorials/dwavelet/DWTTut.html

12. Valens, C. (1999) A Really Friendly Guide to Wavelets.
    http://math.ecnu.edu.cn/~qgu/friendintro.pdf

13. Domingues, M.O., Mendes Jr., O. and da Costa, A.M. (2005) On Wavelet Techniques in Atmospheric Sciences. Advances in Space Research, 35, 831-842.

14. Chourasia, V.S. and Mittra, A.K. (2009) Selection of Mother Wavelet and Denoising Algorithm for Analysis of Foetal Phonocardiographic Signals. Journal of Medical Engineering & Technology, 33, 442-448.
    http://dx.doi.org/10.1080/03091900902952618

15. Heidari, N., Azmi, R. and Pishgoo, B. (2011) Fabric Textile Defect Detection, by Selecting a Suitable Subset of Wavelet Coefficients, through Genetic Algorithm. International Journal of Image Processing, 5, 23-27.

16. Amjad Ali, S., Vathsal, S. and Lal Kishore, K. (2010) A GA-Based Window Selection Methodology to Enhance Window-Based Multi-Wavelet Transformation and Thresholding Aided CT Image Denoising Technique. International Journal of Computer Science and Information Security, 7, 280-288.

17. Hosseini, P.T., Almasganj, F., Emami, T., Behroozmand, R., Gharibzade, S. and Torabinezhad, F. (2008) Local Discriminant Wavelet Packet Basis for Voice Pathology Classification. Proceedings of the 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, 16-18 May 2008, 2052-2055.

18. Jiang, M.Y., Li, C.C., Yuan, D.F. and Lagunas, M.A. (2007) Multiuser Detection Based on Wavelet Packet Modulation and Artificial Fish Swarm Algorithm. Proceedings of the IET Conference on Wireless, Mobile and Sensor Networks, Shanghai, 12-14 December 2007, 117-120.

19. Jiang, M.Y., Yuan, D.F., Jiang, Z. and Wei, M.M. (2005) Determination of Wavelet Denoising Threshold by PSO and GA. Proceedings of the IEEE International Symposium on Antenna, Propagation and EMC Technologies for Wireless Communications, Beijing, 8-12 August 2005, 1426-1429.
    http://dx.doi.org/10.1109/MAPE.2005.1618192

20. Schaffer, J.D., Janevski, A. and Simpson, M.R. (2005) A Genetic Algorithm Approach for Discovering Diagnostic Patterns in Molecular Measurement Data. Proceedings of the 2005 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, San Diego, 14-15 November 2005, 1-8.

21. Punyadeera, C., Schneider, E.M., Schaffer, J.D., Hsin-Yun, H., Joos, T.O., Kriebel, F., Weiss, M. and Verhaegh, W.F.J. (2010) A Biomarker Panel to Discriminate between Systemic Inflammatory Response Syndrome and Sepsis and Sepsis Severity. Journal of Emergencies, Trauma & Shock, 3, 26.
    http://dx.doi.org/10.4103/0974-2700.58666

22. Boroczky, L., Zhao, L. and Lee, K.P. (2006) Feature Subset Selection for Improving the Performance of False Positive Reduction in Lung Nodule CAD. IEEE Transactions on Information Technology in Biomedicine, 10, 504-511.

23. Janevski, A., Kamalakaran, S., Banerjee, N., Varadan, V. and Dimitrova, N. (2009) PAPAyA: A Platform for Breast Cancer Biomarker Signature Discovery, Evaluation and Assessment. BMC Bioinformatics, 10, 7-8.
    http://dx.doi.org/10.1186/1471-2105-10-S9-S7

24. Kac, G., Durain, E., Amrein, C., Hérisson, E., Fiemeyer, A. and Buu-Hoi, A. (2001) Colonization and Infection of Pulmonary Artery Catheter in Cardiac Surgery Patients: Epidemiology and Multivariate Analysis of Risk Factors. Critical Care Medicine, 29, 971-975.
    http://dx.doi.org/10.1097/00003246-200105000-00014

25. Dalen, J.E. (2001) The Pulmonary Artery Catheter—Friend, Foe, or Accomplice. Journal of the American Medical Association, 286, 348-350.

26. Salerno, D.M. and Zanetti, J. (1990) Seismocardiography: A New Technique for Recording Cardiac Vibrations. Concept, Method, and Initial Observations. Journal of Cardiovascular Technology, 9, 111-118.

27. Korzeniowska-Kubacka, I. and Piotrowicz, R. (2002) Seismocardiography—A Noninvasive Technique for Estimating Left Ventricular Function. Preliminary Results. Przeglad Lekarski, 59, 774-776.

28. Raval, N.Y., Squara, P., Cleman, M., Yalamanchili, K., Winklmaier, M. and Burkhoff, D. (2008) Multicenter Evaluation of Noninvasive Cardiac Output Measurement by Bioreactance Technique. Journal of Clinical Monitoring and Computing, 22, 113-119.
    http://dx.doi.org/10.1007/s10877-008-9112-5

29. Keren, H., Burkhoff, D. and Squara, P. (2007) Evaluation of a Noninvasive Continuous Cardiac Output Monitoring System Based on Thoracic Bioreactance. AJP: Heart and Circulatory Physiology, 293, H583-H589.
    http://dx.doi.org/10.1152/ajpheart.00195.2007

30. Squara, P., Denjean, D., Estagnasie, P., Brusset, A., Dib, J.C. and Dubois, C. (2007) Noninvasive Cardiac Output Monitoring (NICOM): A Clinical Validation. Intensive Care Medicine, 33, 1191-1194.
    http://dx.doi.org/10.1007/s00134-007-0640-0

31. Eshelman, L.J. (1991) The CHC Adaptive Search Algorithm: How to Have Safe Search When Engaging in Nontraditional Genetic Recombination. In: Rawlings, G.J.E., Ed., Foundations of Genetic Algorithms, Morgan Kaufmann, San Francisco, 265-283.

32. Mathias, K.E., Eshelman, L.J., Schaffer, J.D., Augusteijn, L., Hoogendijk, P. and van de Wiel, R. (2000) Code Compaction Using Genetic Algorithms. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO2000), Las Vegas, 10-12 July 2000, 710-717.

33. Schaffer, J.D., Mani, M., Eshelman, L.J. and Mathias, K. (1998) The Effect of Incest Prevention on Genetic Drift. In: Banzhaf, W. and Reeves, C., Eds., Foundations of Genetic Algorithms, Volume 5, Morgan Kaufmann, San Mateo, 235-243.

34. Radcliffe, N. (1994) The Algebra of Genetic Algorithms. Annals of Mathematics and Artificial Intelligence, 10, 339-384.
    http://dx.doi.org/10.1007/BF01531276

35. Radcliffe, N. (1991) Forma Analysis of Random Respectful Recombination. Proceedings of the Fourth International Conference on Genetic Algorithms, San Diego, 13-16 July 1991, 222-229.

36. Eshelman, L.J. and Schaffer, J.D. (1993) Real-Coded Genetic Algorithms and Interval Schemata. In: Whitley, D., Ed., Foundations of Genetic Algorithms, Volume 2, Morgan Kaufmann, San Mateo, 187-202.
    http://dx.doi.org/10.1016/b978-0-08-094832-4.50018-0