**Advances in Pure Mathematics**

Vol. 09, No. 03 (2019), Article ID: 91519, 23 pages

DOI: 10.4236/apm.2019.93010

Deconvolution of the Error Associated with Random Sampling

Peter L. Irwin*, Yiping He, Chin-Yi Chen

Molecular Characterization of Foodborne Pathogens, United States Department of Agriculture, Wyndmoor, PA, USA

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 11, 2019; Accepted: March 26, 2019; Published: March 29, 2019

ABSTRACT

In this work, empirical models describing sampling error ($\Delta$) are reported based upon analytical findings elicited from three common probability density functions (PDFs): the Gaussian, representing any real-valued, randomly changing variable x of mean $\mu$ and standard deviation $\sigma$; the Poisson, representing counting data, i.e., any integral-valued entity's count x (cells, clumps of cells or colony forming units, molecules, mutations, etc.) per tested volume, area, length of time, etc., with population mean $\mu$ and $\sigma =\sqrt{\mu}$; and the binomial, representing the number of successful occurrences of something (${x}^{+}$) out of n observations or sub-samplings. These data were generated in such a way as to simulate what should be observed in practice while avoiding other forms of experimental error. Based upon analyses of $10^{4}$ $\Delta$ measurements, we show that the average $\Delta$ ($\bar{\Delta}$) is proportional to $\sigma \cdot \sqrt[-2]{n}\cdot {\mu}^{-1}$ (${\sigma}_{\bar{x}}\cdot {\mu}^{-1}$; Gaussian) or $\sqrt[-2]{n\cdot \mu}$ (Poisson and binomial). The average proportionality constants associated with these disparate populations were also nearly identical ($\bar{A}=0.783\pm 0.0470$; ±s). However, since $\sqrt{\mu}=\sigma$ for any Poisson process, $\sqrt[-2]{n\cdot \mu}={\sigma}_{\bar{x}}\cdot {\mu}^{-1}$. In a similar vein, we have empirically demonstrated that binomial-associated $\bar{\Delta}$ were also proportional to ${\sigma}_{\bar{x}}\cdot {\mu}^{-1}$. Furthermore, we established that, when all $\bar{\Delta}$ were plotted against either $\sqrt[-2]{n\cdot \mu}$ or ${\sigma}_{\bar{x}}\cdot {\mu}^{-1}$, there was only one relationship, with slope A = 0.767 ± 0.0990 and a near-zero intercept. This latter finding also argues that all $\bar{\Delta}$, regardless of parent PDF, are proportional to ${\sigma}_{\bar{x}}\cdot {\mu}^{-1}$, which is the coefficient of variation for a population of sample means (${C}_{V}\left[\bar{x}\right]$). Lastly, we establish that the proportionality constant A is equivalent to the coefficient of variation associated with the $\Delta$ measurements (${C}_{V}\left[{\Delta}_{j}\right]$) and, therefore, $\bar{\Delta}={C}_{V}\left[{\Delta}_{j}\right]\cdot {C}_{V}\left[\bar{x}\right]$. These results are noteworthy inasmuch as they provide a straightforward empirical link between stochastic sampling error and the aforementioned ${C}_{V}$s. Finally, we demonstrate that all attendant empirical measures of $\Delta$ are reasonably small (e.g., ${s}_{\bar{x}}\cdot {\bar{x}}^{-1}\sim 4\%$) when an environmental microbiome was well-sampled: n = 16 - 18 observations with $\mu \sim 3$ isolates per observation. These colony counting results were supported by the fact that the two major isolates' relative abundance was reproducible in the four most probable composition observations from one common population.

**Keywords:**

Stochastic Sampling Error, Modeling, Most Probable Composition, Quantitative Metagenomics, Food-Borne Bacteria

1. Introduction

There are various analytical procedures for enumerating organisms in environmental samples which diverge in their experimental approach yet are mathematically inter-related. Thus, if V represents the sample volume and ${V}_{e}$ the volume occupied by a test entity of interest (e.g., colony forming units or CFUs), the probability that one particular ${V}_{e}$ will not contain this entity at concentration $\delta $ [1] is

$\left(\frac{V/{V}_{e}-V\cdot \delta}{V/{V}_{e}}\right)=\left(1-{V}_{e}\cdot \delta \right)$;

i.e., $V/{V}_{e}$ is the maximum possible number of entities in V and $V\cdot \delta$ is approximately the actual number of objects present.

Assuming that many ${V}_{e}$ aliquots have been combined to generate V, the probability that no organism will be contained in V is [1]

${P}^{-}={\left[1-{V}_{e}\cdot \delta \right]}^{\frac{V}{{V}_{e}}}$

therefore

$\mathrm{ln}\left[{P}^{-}\right]=\frac{V}{{V}_{e}}\cdot \mathrm{ln}\left[1-{V}_{e}\cdot \delta \right]$.

Since

$\mathrm{ln}\left[1-\psi \right]~-\psi -\frac{{\psi}^{2}}{2}-\frac{{\psi}^{3}}{3}-\frac{{\psi}^{4}}{4}-\cdots $

then, if $\psi ={V}_{e}\cdot \delta $,

$\begin{array}{c}\mathrm{ln}\left[{P}^{-}\right]~\frac{V}{{V}_{e}}\left(-{V}_{e}\cdot \delta -\frac{{V}_{e}^{2}\cdot {\delta}^{2}}{2}-\frac{{V}_{e}^{3}\cdot {\delta}^{3}}{3}-\frac{{V}_{e}^{4}\cdot {\delta}^{4}}{4}-\cdots \right)\\ ~-V\cdot \delta \left(1+\frac{{V}_{e}\cdot \delta}{2}+\frac{{V}_{e}^{2}\cdot {\delta}^{2}}{3}+\frac{{V}_{e}^{3}\cdot {\delta}^{3}}{4}+\cdots \right).\end{array}$

For ${V}_{e}\to 0$ (e.g., E. coli [2] has ${V}_{e}\sim 0.6\,\mu \text{m}^{3}\sim 6\times {10}^{-13}\,\text{mL}$),

$\mathrm{ln}\left[{P}^{-}\right]~-V\cdot \delta $

${P}^{-}=\mathrm{exp}\left[-V\cdot \delta \right]=\mathrm{exp}\left[-\mu \right]$

therefore

${P}^{+}=1-{P}^{-}=1-\mathrm{exp}\left[-V\cdot \delta \right]=1-\mathrm{exp}\left[-\mu \right]$. (1)

In certain circumstances it is only possible to determine an organism’s $\delta $ by diluting the sample to such an extent that only a fraction of the n “technical” replicates tested are positive ( ${x}^{+}$ ) for the presence of the entity, or microbe, in question [3] [4]. This technique is referred to as the “dilution method” [1] since it involves diluting a test sample’s content to extinction ( $\delta \to 0$ ). This enumeration protocol is also known as the most probable number (MPN) method and entails sampling from a liquid source, making serial dilutions from this, distributing an aliquot of each of these dilutions into separate receptacles, incubating these under suitable growth conditions, and observing if any growth has occurred based upon some organism-specific detection method [5] [6]. The MPN enumeration procedure is particularly useful when sampling from environmental sources, such as foods, since damaged cells frequently recover in liquid media [7].

For example, were one to obtain a food sample containing ~14 CFU of a particular organism per 50 g, the cells would typically be washed from the food matrix, concentrated to a few mL (e.g., via centrifugation), and brought up to some appropriate volume (say 40 mL = V_{sample}) with media [5]. From this, eight 4 mL (V) samples could be randomly selected and distributed into 8 separate receptacles (n = 8 with a dilution factor of 1; i.e., undiluted). Of the remaining 8 mL, 4 mL could be further diluted with 36 mL (40 mL total) of liquid media, mixed, and distributed into another set of 8 containers. This set of dilutions has a dilution factor of 0.1 relative to the original. With the remaining 8 mL from the 0.1 dilution, 4 mL could be diluted again with 36 mL media, mixed, and distributed into yet another eight 4 mL replicates (dilution factor = 0.01). After incubation, the most likely number (Equation (2), below) of positive occurrences (e.g., presence of a specific gene [5]) observed would be ${x}^{+}$ = 6, 1, and 0 (out of n = 8 observations per dilution) for dilution factors of 1, 0.1, and 0.01, respectively, and the calculated MPN (±s) per 50 g sample would be 13.8 ± 5.56. Note the relatively large error term. For a 4-fold proportional (200 g, 160 mL V_{sample}) experiment with n = 32, the calculated MPN is 13.8 ± 2.78 per 50 g sample.

For MPN-based organism detection and subsequent enumeration, the number of positive occurrences of growth in any $j^{th}$ experiment out of n observations, ${x}_{j}^{+}={\sum}_{i=1}^{n}{\theta}_{ij}$ (θ = either 1 [presence] or 0 [absence]), can be estimated as

${x}^{+}~n\cdot {P}^{+}=n\left(1-\mathrm{exp}\left[-V\cdot \delta \right]\right)$ (2)

whereupon ${x}^{+}$ is integral (=ROUND( $n\cdot {P}^{+}$, 0) in Excel). The probability of observing ${x}^{+}$ successes out of n Bernoulli trials [8] each of volume V from a population of $\delta $ entities per V is

${P}_{b}=\frac{n!}{{x}^{+}!\left(n-{x}^{+}\right)!}{\left({P}^{-}\right)}^{n-{x}^{+}}{\left({P}^{+}\right)}^{{x}^{+}}$

which is also known as the binomial PDF. Since $n\cdot {P}^{+}$ = the population average (real) [9] number of positive responses out of n tests ( ${\mu}^{+}$ ), the above can be also written as

${P}_{b}=\frac{n!}{{x}^{+}!\left(n-{x}^{+}\right)!}{\left(1-\frac{{\mu}^{+}}{n}\right)}^{n-{x}^{+}}{\left(\frac{{\mu}^{+}}{n}\right)}^{{x}^{+}}$. (3)

The multiple-dilution MPN calculation itself is determined by finding the value of $\delta$ at the maximum in the product of the ${P}_{b}$s from all $\ell^{th}$ dilutions ($\prod_{\ell}{P}_{b,\ell}$) and is easily achieved by adding the scaled sum of all dilutions' ${\partial}_{\delta}{P}_{b}\div {P}_{b}$ values to an initial guess for $\delta$; i.e.,

$\begin{array}{l}{\delta}_{m+1}={\delta}_{m}+{\lambda}_{m}\times {\sum}_{\ell}{\left\{{\partial}_{\delta}{P}_{b,\ell}\div {P}_{b,\ell}\right\}}_{m}\\ ={\delta}_{m}+{\lambda}_{m}\times {\sum}_{\ell}\left\{\left({x}_{\ell}^{+}-n+\left({x}_{\ell}^{+}\div \left(\mathrm{exp}\left[V\cdot {\delta}_{m}\cdot {0.1}^{\ell}\right]-1\right)\right)\right)\cdot V\cdot {0.1}^{\ell}\right\}\end{array}$

for any particular $\ell^{th}$ one-to-ten dilution and m iterations ($\lambda$ is a scaling function that changes monotonically with m), then solving for the MPN recursively [1] [4] [5] [10], which drives the summation to zero.
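The recursion above can be sketched in code. The following Python snippet is our own minimal illustration, not the authors' spreadsheet implementation: rather than the λ-scheduled update, it finds the root of the same bracketed score term by bisection. Applied to the worked Introduction example (${x}^{+}$ = 6, 1, 0 positives out of n = 8 at dilution factors 1, 0.1, 0.01; V = 4 mL), it lands close to the quoted ~13.8 MPN per 40 mL sample:

```python
import math

def mpn_score(delta, positives, n, V, dil_factors):
    """Sum over dilutions of (x - n + x/(exp[V*delta*dil] - 1)) * V * dil."""
    total = 0.0
    for x, d in zip(positives, dil_factors):
        Vl = V * d                              # effective volume per replicate
        term = x - n
        if x > 0:
            term += x / math.expm1(delta * Vl)
        total += term * Vl
    return total

def solve_mpn(positives, n, V, dil_factors, lo=1e-9, hi=100.0):
    """Bisection: the score is positive below and negative above the MLE."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mpn_score(mid, positives, n, V, dil_factors) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta = solve_mpn([6, 1, 0], n=8, V=4.0, dil_factors=[1.0, 0.1, 0.01])
mpn_per_sample = delta * 40.0    # scale delta (per mL) to the 40 mL V_sample
```

Bisection is chosen purely for robustness; any root-finder applied to the same score yields the same maximum-likelihood $\delta$.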

In the limit n → ∞, Equation (3) simplifies to what is known as the Poisson PDF

${P}_{P}=\frac{{\mu}^{x}\mathrm{exp}\left[-\mu \right]}{x!}$. (4)

Under these circumstances, x is the observed and $\mu $ is the population average number of counts in/on the tested volume, surface, chosen time period, etc. This PDF is applicable to all analytical systems involving, essentially, the counting of objects. However this PDF is applied, the most conspicuous aspect [11] [12] of any Poisson process is that the variance ( ${\sigma}^{2}$ or second moment)

${\sigma}^{2}={\displaystyle \underset{x=0}{\overset{\infty}{\sum}}{\left(x-\mu \right)}^{2}{P}_{P}}=\mu $

equals the population mean ( $\mu $ or first moment)

$\mu ={\displaystyle \underset{x=0}{\overset{\infty}{\sum}}x\cdot {P}_{P}}$.
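The mean-variance identity above is easy to verify numerically; the short stdlib-Python check below (our own code) accumulates the first and second central moments of ${P}_{P}$ for μ = 4:

```python
import math

mu = 4.0
p = math.exp(-mu)                    # P_P at x = 0
probs = []
for x in range(100):                 # tail beyond x = 100 is negligible for mu = 4
    probs.append(p)
    p *= mu / (x + 1)                # recurrence: P_P(x+1) = P_P(x) * mu/(x+1)

mean = sum(x * q for x, q in enumerate(probs))               # first moment
var = sum((x - mean) ** 2 * q for x, q in enumerate(probs))  # second central moment
```

Both sums return μ = 4 to within floating-point truncation, as Equation (4) requires.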

The last probability density function utilized in this stochastic sampling exercise is also related to ${P}_{b}$, Equation (3). This is the Gaussian PDF which we use to quantitatively examine the effects of n and $\sigma $ (fixed $\mu $ ) on the variability of sample means ( $\stackrel{\xaf}{x}$ ) which have been created by randomly sampling from a population of real-valued variables (x; e.g., doubling time [13] ) which are normally distributed as

${P}_{G}=\frac{\text{Area}}{\sigma \sqrt{\text{2\pi}}}\mathrm{exp}\left[-\frac{1}{2}{\left(\frac{x-\mu}{\sigma}\right)}^{2}\right]$; (5)

in this relationship the Area term ($\sim \Delta x\cdot {\sum}_{k=1}^{K}{f}_{k}$ for large K) is the approximate area under the fitting function f; it is frequently taken to be 1 since $\Delta x$ is often 1 and $\sum f$ is always ~1. There are several derivations of ${P}_{G}$, but none is as persuasive as the fact that this PDF is simple and has been experimentally shown to be the most likely probability distribution associated with most experimental observations [9] [12].

The original purpose of our sampling-related investigations [7] was to estimate a nominal value for n needed to achieve accurate most probable foodborne bacterial isolate enumeration, combined with 16S rDNA-based identification, for quantitative metagenomic purposes. The relationships were developed by examining the results of 6 × 6 colony counting (Poisson PDF) of highly diluted bacteria [14] [15] as a function of n and $\mu$ as well as by generating counts (x) derived from ${P}_{P}$ to simulate what occurred in the lab [15] [16] but which avoided other forms of experimentally based error [5]. We were able to establish that ${n}_{\mathrm{min}}={n}_{\mu \to 1}\div \sqrt[3]{\mu}$ where ${n}_{\mu \to 1}$ is the number of observations necessary to accurately enumerate a population average of 1 count per volume tested. Based mainly on colony counting experience, we estimate that ${n}_{\mu \to 1}$ is somewhere in the range of n ~ 20 - 30 observations.

Herein we model stochastic sampling errors associated with all the aforementioned PDFs and empirically demonstrate that the resultant mathematical models are, in part, a consequence of the "central limit theorem" [17] (CLT). In general, the CLT states that a distribution of sample means ($\bar{x}$), regardless of parent PDF, approaches a normal distribution analytically equivalent to ${P}_{G}$, Equation (5), with $x=\bar{x}$, $\mu ={\mu}_{\bar{x}}$, and with the ${\sigma}^{2}$ term = ${\sigma}_{\bar{x}}^{2}$ (= ${\sigma}^{2}\div n$) as the number of separate n-samplings increases. We have also elaborated on empirical findings developed previously [5] [15] [16] for predicting errors associated with the random sampling of microorganisms, as well as comparing the internal variations associated with the three different sampling error data types derived from the Gaussian, binomial (MPN), and Poisson relationships. Thus, new results have been created using the aforementioned probability distributions, Equations (2), (4), and (5), and have been highly replicated since each "experiment", comprising n (= 3, 6, 9, 12, or 24) observations, was repeated 100 times.

2. Materials and Methods

2.1. Poisson-Based Data: Equation (4), Figure 1

All counting data were created by multiplying Equation (4) by 360 in order to produce a large number of integral-valued repeats (=ROUND($360\cdot {P}_{P}$, 0)) for any particular count x: e.g., for $\mu =1$ particle per test volume, area, length of time, etc., there would be, most probably, 132 repeats of x = 0, 132 repeats of x = 1, 66 repeats of x = 2, 22 repeats of x = 3, 6 repeats of x = 4, and 1 repeat of x = 5 entities per test. From this pool of 360 counts for each μ, n values of x were randomly selected based upon random number tables created with Mathematica:

$\text{Table}\left[\text{Random}\left[\text{Integer},\left\{1,360\right\}\right],\left\{i,n\right\}\right]$ (6)

Figure 1. (A) relationship of average ${\Delta}_{j}$ ( $\stackrel{\xaf}{\Delta}$ ) for Poisson-based data using Equation (7) (P-I: black symbols and curves) or Equation (8) (P-II: red symbols and curves) as a function of n (= 3, 6, 9, 12, 24) and various values for μ (= 1, 2, 4, 8, 16). Gauss-Newton least squares minimization-based curve-fitting [18] of data was performed [19] to fit to the equation $\stackrel{\xaf}{\Delta}={\Delta}_{n}\cdot {n}^{\stackrel{\xaf}{a}}$ (averages for a are provided ± s; averaged across 5× μ). (B) Non-linear relationship of individual ${\Delta}_{n}$ values from (A) for P-I- and II-based data as a function of μ whereupon curve-fitting of data was also performed using the algebraic form ${\Delta}_{n}=A\cdot {\mu}^{a}$ (values for A and a are provided ± ASE). (C) and (D) Present linearized forms ( $X=\sqrt[-2]{n}$ in (C) and $X=\sqrt[-2]{\mu}$ in (D)) of data reported in Figure 1(A) and Figure 1(B) based upon all values of a = −1/2. Slopes of the lines in Figure 1(C) and Figure 1(D) are equivalent to ${\Delta}_{n}$ and A, respectively.

which generates n random numbers between 1 and 360. Thus, 100 such random number sets were utilized for the twenty-five n (= 3, 6, 9, 12, 24) × μ (= 1, 2, 4, 8, 16) combinations. Briefly, each procedure involved arranging the aforementioned 360 x values (one set for each μ) in one column of a spreadsheet followed by filling in n adjacent columns with formulae which refer to the calculated x values but where each row’s reference number was taken from the Mathematica-generated random number, Equation (6), next in sequence. MPN- and Gaussian-based data arrays were treated in an identical fashion. The formula (P-I: normalized deviations of ${s}_{j}$ from $\sigma =\sqrt{\mu}$ ) for calculating our empirical measure of Poisson stochastic sampling error (Δ) was

${\Delta}_{j}=\frac{\left|\sqrt{\mu}-{s}_{j}\right|}{\mu}$ (7)

whereupon the ${s}_{j}$ term is the experimental standard deviation ($\sqrt{{\left(n-1\right)}^{-1}{\sum}_{i=1}^{n}{\left({x}_{ij}-{\bar{x}}_{j}\right)}^{2}}$, or "=STDEV.S(${x}_{ij}$-array)" in Excel) for each $j^{th}$ ($j=1,2,\cdots ,J$; J = 100) experiment and $i^{th}$ ($i=1,2,\cdots ,n$) x. The average across the 100 experiments, regardless of formulation, is symbolized as $\bar{\Delta}$ (= ${J}^{-1}\cdot {\sum}_{j=1}^{J}{\Delta}_{j}$, or "=AVERAGE(${\Delta}_{j}$-array)"). A second form of the Poisson-based measure of Δ was also calculated (P-II: normalized deviations of ${\bar{x}}_{j}$ from the known μ) from these same data:

${\Delta}_{j}=\frac{\left|\mu -{\stackrel{\xaf}{x}}_{j}\right|}{\mu}$. (8)

Here ${\bar{x}}_{j}$ is the observed arithmetic mean for each $j^{th}$ counting experiment.
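The pool construction of Section 2.1 and Equations (7) - (8) can be exercised end-to-end outside a spreadsheet. The Python sketch below (our own code, stdlib only; Python's round stands in for Excel's ROUND) rebuilds the 360-count pools, recovering for μ = 1 the repeat counts quoted above (132, 132, 66, 22, 6, 1), then draws J = 2000 experiments of n = 12 counts from the μ = 4 pool and averages both forms of Δ. The averages land near $\sqrt{2/\text{\pi}}\cdot \sqrt[-2]{n\cdot \mu}\approx 0.115$, consistent with the fitted A ≈ 0.78 - 0.80 reported in Results (the $\sqrt{2/\text{\pi}}$ factor, the mean absolute deviation of a unit normal, is our own observation, not the paper's):

```python
import math
import random

random.seed(7)

def count_pool(mu, total=360, xmax=30):
    """Repeats of each count x: ROUND(total * P_P(x), 0), per Section 2.1."""
    pool = []
    for x in range(xmax + 1):
        p_x = mu**x * math.exp(-mu) / math.factorial(x)
        pool.extend([x] * round(total * p_x))
    return pool

pool_mu1 = count_pool(1.0)      # 132, 132, 66, 22, 6, 1 repeats of x = 0..5

mu, n, J = 4.0, 12, 2000
pool = count_pool(mu)

def delta_pair(sample):
    xbar = sum(sample) / len(sample)
    s = math.sqrt(sum((v - xbar) ** 2 for v in sample) / (len(sample) - 1))
    return (abs(math.sqrt(mu) - s) / mu,    # Equation (7), P-I
            abs(mu - xbar) / mu)            # Equation (8), P-II

# Each "experiment": n draws with replacement from the 360-count pool
d1s, d2s = zip(*(delta_pair(random.choices(pool, k=n)) for _ in range(J)))
dbar1, dbar2 = sum(d1s) / J, sum(d2s) / J
```

Here `random.choices(pool, k=n)` plays the role of the Mathematica random number table, Equation (6), selecting rows with replacement.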

2.2. MPN Experiments: Equation (1), Figure 2

All MPN data were created by multiplying Equation (1) by 360 to produce the number (“=ROUND ( $360\cdot {P}^{+}$, 0)”) of positive responses (θ = 1) for any particular level of $V\cdot \delta $ (=μ); e.g., for $\mu =0.1$ entity per volume tested there would be 34 repeats of θ = 1 and 326 repeats of θ = 0. From such a column of 360 θ values (one column for each μ), n were randomly selected based upon Mathematica tables, Equation (6), and treated similar to the Poisson data above. Thus, for each combination of n (= 3, 6, 9, 12, or 24) × μ (= 0.1, 0.2, 0.4, 0.8, 1.6), 100

Figure 2. (A) Relationship of average ${\Delta}_{j}$ ( $\stackrel{\xaf}{\Delta}$ ) for MPN-based data using Equation (9) as a function of n (= 3, 6, 9, 12, or 24) and variable μ (= 0.1, 0.2, 0.4, 0.8, 1.6). Gauss-Newton least squares minimization-based curve-fitting [18] of data was performed [19] to fit the algebraic form $\stackrel{\xaf}{\Delta}={\Delta}_{n}\cdot {n}^{\stackrel{\xaf}{a}}$ (averages for a are provided ± s; averaged across 5 × μ) to these results. (B) Relationship of individual ${\Delta}_{n}$ values from (A) for MPN-based data as a function of μ where curve-fitting of data was performed also to the algebraic form ${\Delta}_{n}=A\cdot {\mu}^{a}$ (values for A and a are provided ± ASE). (C) and (D) Represent linearized forms ( $X=\sqrt[-2]{n}$ in (C) and $X=\sqrt[-2]{\mu}$ in (D) of data reported in Figure 2(A) and Figure 2(B) based upon the assumption that a = −1/2. Slopes of the lines in Figure 2(C) and Figure 2(D) are equivalent to ${\Delta}_{n}$ and A, respectively.

random n-selections were performed. The formula for calculating our empirical measure of MPN sampling error was

${\Delta}_{j}=\frac{\left|n\cdot {P}^{+}-{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\theta}_{ij}}\right|}{n\cdot {P}^{+}}=\frac{\left|{\mu}^{+}-{x}_{j}^{+}\right|}{{\mu}^{+}}$; (9)

where θ = either a "1" (a positive occurrence) or a "0" (a negative occurrence). As before, the average ${\Delta}_{j}$ across J = 100 experiments (each of n observations) = $\bar{\Delta}$. The MPN value ${\bar{x}}_{j}^{+}=\mathrm{ln}\left[n\div \left(n-{x}_{j}^{+}\right)\right]$, a rearrangement of Equation (2), provides the average MPN or CFU per sample.
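The MPN pools admit the same treatment. This sketch (our own code, with our own variable names) rebuilds the μ = 0.1 pool quoted above (34 ones, 326 zeros) and averages Equation (9) over J simulated n = 12 experiments:

```python
import math
import random

random.seed(3)

mu, n, J = 0.1, 12, 2000
p_pos = 1.0 - math.exp(-mu)                  # P+, Equation (1)
ones = round(360 * p_pos)                    # 34 positive responses (theta = 1)
pool = [1] * ones + [0] * (360 - ones)       # 326 zeros (theta = 0)

mu_plus = n * p_pos                          # population-average positives, n*P+
deltas = []
for _ in range(J):
    x_plus = sum(random.choices(pool, k=n))  # observed positives out of n
    deltas.append(abs(mu_plus - x_plus) / mu_plus)   # Equation (9)
dbar = sum(deltas) / J
```

For these parameters $\bar{\Delta}$ comes out near $0.78\cdot \sqrt[-2]{n\cdot \mu}\approx 0.71$, matching the scaling reported in Figure 2.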

2.3. Gaussian-Based Data: Equation (5), Figure 3

All Gaussian PDF data were produced by multiplying Equation (5) ( $\Delta x=1$ ) by 360 producing an integral number of observations (“=ROUND ( $360\cdot {P}_{G}$, 0)”) for each value of x as a function of μ (fixed at 20) and σ (= 1, 1.5, 2, 3, 4). For instance, for σ = 1 there would be 2 repeats of x = 17, 19 repeats of x = 18, 87 repeats of x = 19, 144 repeats of x = 20, 87 repeats of x = 21, 19 repeats of x = 22, and 2 repeats of x = 23. From this column of 360 values of x, n (= 3, 6, 9, 12, or

Figure 3. (A) Relationship of average ${\Delta}_{j}$ ($\bar{\Delta}$) for Gaussian-based data using Equation (10) as a function of n with variable σ (= 1, 1.5, 2, 3, 4; μ = 20). Gauss-Newton least squares minimization-based curve-fitting [18] of data was performed [19] to fit the equation $\bar{\Delta}={\Delta}_{n}\cdot {n}^{\bar{a}}$ (averages for a are provided ± s; averaged across 5× σ) to these results. (B) and (D) Relationship of individual ${\Delta}_{n}$ values from (A) and (C) for Gaussian-based data as a function of μ-normalized standard deviations ($X=\sigma \div \mu$). Linear regression-based fitting of data was performed to the algebraic form $A\cdot \sigma \div \mu$. (C) Linearized forms ($X=\sqrt[-2]{n}$) of data reported in Figure 3(A) based on a = −1/2. Slopes of the lines in Figure 3(C) are equivalent to ${\Delta}_{n}$ and plotted in Figure 3(D).

24) were randomly selected based upon Equation (6) and treated identically to the Poisson and MPN data sets. Thus, for each combination of n × σ, 100 n-based selections were performed. The formula for calculating our empirical measure of Gaussian sampling error, similar to Equation (7), was

${\Delta}_{j}=\frac{\left|\sigma -{s}_{j}\right|}{\mu}$. (10)

As usual, the average ${\Delta}_{j}$ across J = 100 such sets of experiments each of n observations = $\stackrel{\xaf}{\Delta}$.
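Likewise for the Gaussian pools: the sketch below (our own code) rebuilds the σ = 1, μ = 20 pool quoted above (2, 19, 87, 144, 87, 19, 2 repeats for x = 17 - 23) and averages Equation (10) over J simulated n = 12 experiments:

```python
import math
import random

random.seed(1)

mu, sigma, n, J = 20.0, 1.0, 12, 2000

# Pool: ROUND(360 * P_G(x), 0) with Delta-x = 1 and Area = 1
pool = []
for x in range(10, 31):
    p_x = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    pool.extend([x] * round(360 * p_x))

deltas = []
for _ in range(J):
    sample = random.choices(pool, k=n)       # n draws with replacement
    xbar = sum(sample) / n
    s = math.sqrt(sum((v - xbar) ** 2 for v in sample) / (n - 1))
    deltas.append(abs(sigma - s) / mu)       # Equation (10)
dbar = sum(deltas) / J
```

The resulting $\bar{\Delta}$ is on the order of $A\cdot \sigma \cdot \sqrt[-2]{n}\cdot {\mu}^{-1}\approx 0.01$, the scaling reported in Figure 3.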

2.4. Other Calculations

All curve-fitting was based upon a modified Gauss-Newton algorithm with least squares minimization [18] performed on a Microsoft Excel spreadsheet [19]; some of these results were fit to the algebraic form $f\left[X\right]=\text{constant}\cdot {X}^{a}$. However, certain MPN data (${\bar{x}}^{+}$ and ${x}^{+}$) were also fit to a Gaussian (Equation (5): ${P}_{G}\left[{\bar{x}}^{+}\right]$ or ${P}_{G}\left[{x}^{+}\right]$) with $\Delta x$ used as one of the parameters to be iteratively resolved (i.e., deconvolved). Where appropriate, confidence limits (CL) have been calculated using an approach applicable to any hypothetical fitting function ${f}_{k}=f\left[{X}_{k};{\pi}_{p}\right]$: $k=1,2,\cdots ,K$ rows of the observed X-Y data sets with up to P (typically ≤ 3) fitting parameters ${\pi}_{p}$ ($p=1,2,\cdots ,P$). In this procedure we use the propagation of error method [9] [20] for estimating the standard error associated with each ${f}_{k}$ (${s}_{{f}_{k}}$; illustrated below for P = 2 fitting parameters) data point

$CL=t\cdot {s}_{{f}_{k}}=t\sqrt{{s}_{{\pi}_{1}}^{2}{\left[{\partial}_{{\pi}_{1}}{f}_{k}\right]}^{2}+{s}_{{\pi}_{2}}^{2}{\left[{\partial}_{{\pi}_{2}}{f}_{k}\right]}^{2}+2\cdot {s}_{{\pi}_{1}{\pi}_{2}}^{2}\cdot {\partial}_{{\pi}_{1}}{f}_{k}\cdot {\partial}_{{\pi}_{2}}{f}_{k}}$

where, for any particular fitting parameter $\omega $, ${s}_{\omega}=\sqrt{{s}_{Y}^{2}\cdot {\left[{Z}^{\text{T}}Z\right]}_{\omega \omega}^{-1}}$ = “asymptotic standard error” [19] (ASE; ${s}_{Y}^{2}$ = residual sum of squares ÷ [K − P]), and the ${\partial}_{{\pi}_{p}}{f}_{k}$ terms symbolize $\partial {f}_{k}/\partial {\pi}_{p}$. The above equation simplifies to

$CL={t}_{0.01}\cdot {s}_{{f}_{k}}={t}_{0.01}\sqrt{{s}_{Y}^{2}\left({Z}_{k}{\left[{Z}^{\text{T}}Z\right]}^{-1}{Z}_{k}^{\text{T}}\right)}$.

In all the above relationships Z is the partial first derivative matrix of ${f}_{k}$ with respect to the parameters ${\pi}_{1}$ and ${\pi}_{2}$ (i.e., a 2-parameter fit) such that

$Z=\left[\begin{array}{cc}{\partial}_{{\pi}_{1}}{f}_{1}& {\partial}_{{\pi}_{2}}{f}_{1}\\ {\partial}_{{\pi}_{1}}{f}_{2}& {\partial}_{{\pi}_{2}}{f}_{2}\\ \vdots & \vdots \\ {\partial}_{{\pi}_{1}}{f}_{K}& {\partial}_{{\pi}_{2}}{f}_{K}\end{array}\right]$,

${Z}^{\text{T}}$ is the transpose of Z, ${Z}_{k}=\left[\begin{array}{cc}{\partial}_{{\pi}_{1}}{f}_{k}& {\partial}_{{\pi}_{2}}{f}_{k}\end{array}\right]$ (K row vectors), and ${s}_{Y}^{2}\cdot {\left[{Z}^{\text{T}}Z\right]}^{-1}$ is the variance-covariance matrix [21]. CL were not displayed for all results since they might have obscured other analytical aspects of the compositions.
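For a 2-parameter straight-line fit, these CL formulas reduce to the familiar regression "leverage". The numpy sketch below (our own code, with invented toy data) computes ${Z}_{k}{\left[{Z}^{\text{T}}Z\right]}^{-1}{Z}_{k}^{\text{T}}$ directly and checks it against the textbook form $1/K+{\left({X}_{k}-\bar{X}\right)}^{2}/\sum {\left(X-\bar{X}\right)}^{2}$:

```python
import numpy as np

# Toy straight-line data (invented): f_k = pi1 + pi2 * X_k
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

Z = np.column_stack([np.ones_like(X), X])    # partial-derivative (design) matrix
pi_hat, rss, *_ = np.linalg.lstsq(Z, Y, rcond=None)
K, P = Z.shape
s_Y2 = float(rss[0]) / (K - P)               # residual sum of squares / (K - P)

ZtZ_inv = np.linalg.inv(Z.T @ Z)
h = np.einsum('ki,ij,kj->k', Z, ZtZ_inv, Z)  # Z_k [Z^T Z]^-1 Z_k^T, one per row
s_fk = np.sqrt(s_Y2 * h)                     # standard error of each f_k

# Textbook leverage for a straight-line fit
lev = 1 / K + (X - X.mean()) ** 2 / ((X - X.mean()) ** 2).sum()
```

Multiplying `s_fk` by the appropriate Student t then gives the CL band; the covariance cross-term of the first expression is carried automatically by the matrix form.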

2.5. Microbiome Sampling Data

For the food microbiome sampling experiment ~25 g of commercial, pre-thawed (~15 min at room temperature), frozen vegetables were washed with a volume of phosphate buffered saline (PBS; 10 mM Na_{2}HPO_{4} + 2 mM NaH_{2}PO_{4} + 137 mM NaCl; pH 7.4 ± 0.2; Boston BioProducts, 159 Chestnut Street, Ashland, MA 01721) equivalent to double the mass of the sample. In order to assist in the detachment of plant tissue-bound cells, 0.075% [w/v] Tween-20 (Sigma-Aldrich, 3050 Spruce St., St. Louis, MO 63103) was added to the PBS and filter sterilized. All washing was performed in sanitized plastic zip-lock bags wherein the formerly frozen vegetables and buffer wash were gently agitated at 80 rpm for approximately 20 min and immediately passed through a 40 μm nylon filter (BD Falcon; Becton Dickinson Biosciences, Bedford, MA) to remove large particles.

Directly sampled washes (5 mL Control = Observation I [cultured at 30˚C] and III [cultured at 37˚C]) as well as hollow fiber microfilter-concentrated (each 5 mL sample was diluted to ~100 mL PBS + Tween, concentrated, then washed with another 100 mL buffer, and eluted with ~5 mLs PBS + Tween = Observation II [cultured at 30˚C] and IV [cultured at 37˚C]) samples were collected and enumerated using the 6 × 6 drop plate method [14] but using 1:2 serial dilutions for colony selection on Brain Heart Infusion agar (BHI + 2% [w/v] agar). Briefly, this drop plate method involved loading 400 μL of each wash (either control or concentrated samples brought back to the control sample’s original volume = 5 mL) filtrate into the first well (row A) of a 96-well microtiter plate. Two-fold serial dilutions were made by transferring 200 μL (multichannel pipette, Rainin, Emeryville, CA) from the first row (row A; dilution 0) into 200 μL of diluent (PBS) in the 2nd row (row B; dilution 1), mixing 10 times while continuously stirring, and repeating the process until five 1:2 dilutions were produced; pipette tips were changed between dilutions. Based on a previous analysis of 6 × 6 drop plate sampling error [15] , we sampled n = 16 - 18 seven μL volumes from each of the 6 dilutions (dilutions 0 - 5; overall dilution factors of 0.5^{0} = 1 to 0.5^{5} = 0.03125) and drop-plated these onto BHI agar media using a multichannel pipette. After plating, the droplets were allowed to dry, inverted and then incubated at two temperatures (either 30˚C or 37˚C; 3 plates for each temperature and treatment combination). Colonies were counted after 16 - 24 hours. Colony collection for our 16S rDNA bacterial identification protocol [7] involved selecting all colonies from dilution 2 (0.5^{2} = 0.25 dilution;
$\bar{x}$ = 2.79 ± 1.52 colonies per drop; ±s; the fact that $\sqrt{\bar{x}}$ = 1.67 ~ s might argue for an appropriately sampled population).

Each colony (n total) was carefully removed from the agar plate’s surface using a Rainin L20 tip, dispersed into 200 μL BHI in a 96-well plate and incubated at 30˚C for 16 - 24 hours. These cultures were restreaked onto solid media and incubated at 30˚C overnight. One colony from each of the original n plates was selected, suspended into 25 μL of Ultra PrepMan (Applied Biosystems, Foster City, CA) in a PCR tube and heated in a thermocycler at 99˚C for 15 min. Upon cooling, samples were centrifuged 10 min. to separate the DNA solution from the cell debris. A sample of supernatant was transferred to a new tube for the DNA amplification step (end-point PCR). Once the 16S rRNA “gene” amplification, sequencing reactions (EubA and EubB primers) and Sanger sequencing were performed, DNA sequences were edited, and contigs assembled using Sequencher software as explained in detail previously [7].

3. Results and Discussion

Figure 1 shows results related to averages of 100 × ${\Delta}_{j}$ values ( $\stackrel{\xaf}{\Delta}$ ) derived from Equations (7) (P-I, black data) or (8) (P-II, red data) as a function of n (Figure 1(A)) and μ (Figure 1(B)). The least squares curve-fitting results show that the Figure 1(A) data follow the general form $\stackrel{\xaf}{\Delta}={\Delta}_{n}\cdot {n}^{\stackrel{\xaf}{a}}$ whereupon $\stackrel{\xaf}{a}$ (averaged across 5 n-based fits) = −0.556 ± 0.00986 (black data sets; ±s) or $\stackrel{\xaf}{a}=-0.529\pm 0.0387$ (red data). These findings suggest that $\stackrel{\xaf}{\Delta}$ changes as the inverse square root of n for all values of μ. Figure 1(C) displays these same results on a linearized scale (X-axis = $\sqrt[-2]{n}$ ) whereupon the slopes $\left(\partial \stackrel{\xaf}{\Delta}/\partial \left[\sqrt[-2]{n}\right]\right)~{\Delta}_{n}$. Figure 1(B) illustrates that the ${\Delta}_{n}$ values derived from Figure 1(A) non-linear regression change as the inverse square root of μ: i.e., ${\Delta}_{n}=A\cdot {\mu}^{a}$ where a = −0.547 ± 0.0179 (black data) or −0.503 ± 0.0374 (red data); a ± ASE. Figure 1(D) shows Figure 1(B) results plotted on an appropriately linearized scale (X-axis = $\sqrt[-2]{\mu}$ ) as indicated by the above analysis whereupon the slope $\left(\partial {\Delta}_{n}/\partial \left[\sqrt[-2]{\mu}\right]\right)~A$. Combining results from Figure 1(A) and Figure 1(B) we see that $\stackrel{\xaf}{\Delta}~A\cdot \sqrt[-2]{n\cdot \mu}$. The average value for A was 0.804 ± 0.0460 (P-I & P-II curve-fitting results ±s).

Figure 2 displays MPN-based enumeration data, Equation (9), manipulated in a similar fashion as that of the above Poisson-based results with a nearly identical result. The least squares curve-fitting shows that the data in Figure 2(A) once again follow the general form $\stackrel{\xaf}{\Delta}={\Delta}_{n}\cdot {n}^{\stackrel{\xaf}{a}}$ with $\stackrel{\xaf}{a}$ = −0.554 ± 0.0499 (±s) which is the average a from 5× μ-based data sets. Figure 2(C) shows these same findings graphed on a linearized scale ( $X=\sqrt[-2]{n}$ ) whereupon the slopes = ${\Delta}_{n}$. Figure 2(B) also shows that the ${\Delta}_{n}$ values, derived from Figure 2(A) non-linear regression, change as the inverse square root of μ: ${\Delta}_{n}=A\cdot {\mu}^{a}$ where a = −0.515 ± 0.0910 (±ASE). As previously observed, when these results are presented on a linearized scale ( $X=\sqrt[-2]{\mu}$; Figure 2(D)) the slope is equivalent to the parameter A. Combining fitting results from Figure 2(A) and Figure 2(B) we again note that $\stackrel{\xaf}{\Delta}~A\cdot \sqrt[-2]{n\cdot \mu}$ (A = 0.807 ± 0.139; ± ASE).

Completely homologous relationships to the Poisson and MPN findings were also noted with the Gaussian-based data (Figure 3), whereupon the least squares curve-fitting in Figure 3(A) shows that these data again obey the general form $\bar{\Delta}={\Delta}_{n}\cdot {n}^{\bar{a}}$ with $\bar{a}$ = −0.561 ± 0.0276 (±s; averaged across all σ since μ was fixed). Figure 3(C) has these same findings plotted on a linearized scale ($X=\sqrt[-2]{n}$) where the slopes = ${\Delta}_{n}$. Figure 3(B) and Figure 3(D) also show that the ${\Delta}_{n}$ values derived from Figure 3(A) and Figure 3(C) non-linear regression change linearly with $\sigma \div \mu$: i.e., ${\Delta}_{n}=A\cdot \sigma \div \mu$ (A = 0.725 ± 0.0977; ±ASE). All Gaussian-based data fitting results combined indicate that $\bar{\Delta}=A\cdot \sigma \cdot \sqrt[-2]{n}\cdot {\mu}^{-1}=A\cdot {\sigma}_{\bar{x}}\cdot {\mu}^{-1}=A\cdot {C}_{V}\left[\bar{x}\right]$, whereupon ${C}_{V}\left[\bar{x}\right]$ is the coefficient of variation for a population of means associated with x.

3.1. Equivalence of Sampling Errors Associated with Any PDF

The counting results alluded to above (P-I, P-II, & MPN) are similar to those observed previously [5] [15] [16] : i.e., stochastic sampling errors associated with microbiological colony counting and MPN data are proportional to the inverse square root of n × μ. The Poisson population-based results also compare favorably with those obtained from actual colony counting experiments [14]. Thus, for all Poisson-based data (Figure 1)

$\stackrel{\xaf}{\Delta}\propto \frac{1}{\sqrt{n\cdot \mu}}=\frac{1}{\sqrt{n}}\cdot \frac{\sqrt{\mu}}{\mu}=\frac{\sigma}{\sqrt{n}}\cdot \frac{1}{\mu}=\frac{{\sigma}_{\stackrel{\xaf}{x}}}{\mu}={C}_{V}\left[\stackrel{\xaf}{x}\right]$ (11)

because $\sigma =\sqrt{\mu}$. We have simplified the expression by utilizing the term ${\sigma}_{\stackrel{\xaf}{x}}$ [22] (= $\sigma \xf7\sqrt{n}$ ), which can be derived using the propagation of errors method [20]. Such nomenclature exemplifies the utilization of ${P}_{G}$, as an approximation for ${P}_{P}$, associated with a population of sample means ( $\stackrel{\xaf}{x}$ ) of mean ${\mu}_{\stackrel{\xaf}{x}}$ and standard deviation ${\sigma}_{\stackrel{\xaf}{x}}$. However, does $\sigma ~\sqrt{\mu}$ also hold, at least as an approximation, for MPN results? This question is addressed in detail in Figures 4-6.
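Equation (11) can also be checked numerically. The sketch below, again assuming ${\Delta}_{j}=|{\stackrel{\xaf}{x}}_{j}-\mu |\xf7\mu $ (an assumption, since Equations (7)-(10) are defined elsewhere in the paper), draws Poisson counts and compares $\stackrel{\xaf}{\Delta}$ with ${C}_{V}\left[\stackrel{\xaf}{x}\right]=\sqrt[-2]{n\cdot \mu}$:

```python
import numpy as np

rng = np.random.default_rng(11)
J = 2000  # more experiments than the paper's J = 100, for a tighter average

for mu in (5.0, 20.0):
    for n in (6, 24):
        # J experiments, each averaging n Poisson counts of population mean mu
        xbar = rng.poisson(mu, size=(J, n)).mean(axis=1)
        delta_bar = np.abs(xbar - mu).mean() / mu
        cv_xbar = 1.0 / np.sqrt(n * mu)      # sigma_xbar / mu for a Poisson
        print(mu, n, round(delta_bar / cv_xbar, 3))   # ratio ~ A ~ 0.78
```

The printed ratios cluster near 0.8 regardless of n or μ, mirroring the near-constant A reported across the disparate populations.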

Figure 4. (A) & (C) Frequency of observing each MPN-based calculated number of entities per sample tested ( ${\stackrel{\xaf}{x}}^{+}=\mathrm{ln}\left[n\xf7\left(n-{x}^{+}\right)\right]$; μ = 0.8 for (A) & (B); μ = 0.4 for (C) & (D)) fit to Equation (5) (i.e., ${P}_{G}\left[{\stackrel{\xaf}{x}}^{+}\right]$ as a function of ${\stackrel{\xaf}{x}}^{+}$ ). (B) & (D) show that ${\sigma}_{fit}~\sqrt{{\mu}_{fit}}\xf7\sqrt{n}~{\sigma}_{\stackrel{\xaf}{x}}$ (i.e., for MPN, $\sigma =\sqrt{\mu}$ ) for all modeled n-samplings. Error bars = t_{0.05} × the experimental (overall) ${s}_{\stackrel{\xaf}{x}}=\sqrt{EMS\xf7n}$.

Figure 5. (A) & (B) Frequency of observing each MPN-based number of positive counts ${x}^{+}$ out of n tested: μ = 0.8 and n = 3, 6, 9, 12, 24; (A) data points [red] = frequency of observed ${x}^{+}$; (B) data points [blue] = calculated frequency of ${x}^{+}$ using Mathematica ${P}_{b}\left[{x}^{+}\right]=\text{Table}\left[N\left[\frac{{\left({\text{e}}^{-\mu}\right)}^{n-{x}^{+}}{\left(1-{\text{e}}^{-\mu}\right)}^{{x}^{+}}n!}{\left(n-{x}^{+}\right)!{x}^{+}!}\right],\left\{{x}^{+},0,n+1\right\}\right]$ from ${P}_{b}$, Equation (3), fit to a Gaussian probability distribution: e.g., ${P}_{G}\left[{x}^{+}\right]$, Equation (5). (C) Demonstrates that ${\sigma}_{fit}^{+}\propto \sqrt{{\mu}_{fit}^{+}}$. Linear fit showing slope (A) and intercept (I) ± ASE. The non-linear fits were ${\sigma}_{fit}^{+}=A\cdot {\left({\mu}_{fit}^{+}\right)}^{0.550\pm 0.0463}$. Best fit curves shown ± P = 0.05 CL.
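The Mathematica table in the Figure 5 caption is simply the binomial pmf of Equation (3) with ${P}^{+}=1-{\text{e}}^{-\mu}$. A Python rendering over the meaningful range ${x}^{+}=0,\cdots ,n$ (the table's ${x}^{+}=n+1$ entry is zero) might look like:

```python
from math import comb, exp

def P_b(n, mu):
    """Binomial probability of each x_plus (0..n) positive responses out of n,
    where the chance any one aliquot is occupied is P_plus = 1 - exp(-mu)."""
    p_plus = 1.0 - exp(-mu)
    return [comb(n, x) * (1.0 - p_plus) ** (n - x) * p_plus ** x
            for x in range(n + 1)]

probs = P_b(n=12, mu=0.8)
print([round(p, 4) for p in probs])
print(round(sum(probs), 6))   # a pmf: the probabilities sum to 1
```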

Figure 6. Demonstration that $\text{d}\stackrel{\xaf}{\Delta}/\text{d}\left({\sigma}_{\stackrel{\xaf}{x}}/\mu \right)~A~\text{d}\stackrel{\xaf}{\Delta}/\text{d}\left(\sqrt[-2]{n\cdot \mu}\right)$. All data are plotted ± P = 0.001 CL. (A) is related to P-I data (A = 0.741 ± 0.0203; ± ASE). (B) is related to P-II data (A = 0.827 ± 0.0133). (C) is related to MPN data (A = 0.861 ± 0.0273). (D) is related to Gaussian data (A = 0.637 ± 0.0280). All data are merged in (E): slope of this relationship which involves all three PDFs is 0.767 ± 0.0990.

In Figure 4(A) and Figure 4(C), we have examined some of our MPN data (μ = 0.8 per sample in Figure 4(A) and μ = 0.4 per sample in Figure 4(C), at the various levels of n-sampling) by converting the total number of positive occurrences ( ${x}_{j}^{+}$ ) in n observations to the most probable number of entities in the hypothetical sampled aliquot ( ${\stackrel{\xaf}{x}}_{j}^{+}=\mathrm{ln}\left[n/\left(n-{x}_{j}^{+}\right)\right]$ ) and curve-fitting the frequency of occurrence of each ${\stackrel{\xaf}{x}}_{j}^{+}$ to Gaussian PDFs (Equation (5); ${P}_{G}\left[{\stackrel{\xaf}{x}}^{+}\right]$ ). From these curve fits we extracted the parameters ${\sigma}_{fit}$ and ${\mu}_{fit}$. In Figure 4(B) and Figure 4(D) we show that, on average, ${\sigma}_{fit}\sqrt{n}~\sqrt{{\mu}_{fit}}$ (i.e., ${\sigma}_{fit}=\sqrt{{\mu}_{fit}}\xf7\sqrt{n}={\sigma}_{\stackrel{\xaf}{x}}$ ) and, therefore, $\sigma =\sqrt{\mu}$. This finding indicates that Equation (11) can be applied to both Poisson and MPN results as a reasonable approximation. We have confirmed the MPN results in Figure 2 and Figure 4 by showing that the frequency distribution of ${x}^{+}$ observed in these experiments closely follows Equation (3) (compare Figure 5(A) with Figure 5(B)). We thereby establish that ${\sigma}_{fit}^{+}$, the standard deviation associated with the distribution of ${x}^{+}$ via the Gaussian approximation, was proportional to $\sqrt{{\mu}_{fit}^{+}}$ (Figure 5(C)) for both observed (red data) and calculated (blue data) ${x}^{+}$, with a proportionality constant numerically similar to the A (=0.735 ± 0.0543; ±ASE) alluded to above.
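The conversion just described can be simulated directly. The sketch below (illustrative only) draws growth outcomes as Bernoulli trials with ${P}^{+}=1-{\text{e}}^{-\mu}$, applies ${\stackrel{\xaf}{x}}^{+}=\mathrm{ln}\left[n/\left(n-{x}^{+}\right)\right]$, and compares the raw spread with ${\sigma}_{\stackrel{\xaf}{x}}=\sqrt{\mu /n}$; experiments with ${x}^{+}=n$ (all positive), for which the estimator is undefined, are dropped here. The raw standard deviation runs somewhat above $\sqrt{\mu /n}$, consistent with the paper treating the relation as an approximation:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, J = 0.8, 100_000
p_plus = 1.0 - np.exp(-mu)          # probability an aliquot shows growth

for n in (6, 12, 24):
    x_plus = rng.binomial(n, p_plus, size=J)
    x_plus = x_plus[x_plus < n]      # ln[n/(n - x+)] undefined at x+ = n
    mpn = np.log(n / (n - x_plus))   # most probable number per aliquot
    print(n, round(mpn.mean(), 3), round(mpn.std(ddof=1), 3),
          round(np.sqrt(mu / n), 3))  # compare with sigma_xbar = sqrt(mu/n)
```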

The equality in Equation (11) is also visually confirmed by the results shown in Figure 6 where one can see that all values of $\stackrel{\xaf}{\Delta}$ closely follow the linear expression $\stackrel{\xaf}{\Delta}=A\cdot X$ (for $X={\sigma}_{\stackrel{\xaf}{x}}\xf7\mu $ or $\sqrt[-2]{n\cdot \mu}$; A = 0.781 ± 0.0107; ±ASE) showing that

$\frac{\partial \stackrel{\xaf}{\Delta}}{\partial \left[\sqrt[-2]{n\cdot \mu}\right]}=\frac{\partial \stackrel{\xaf}{\Delta}}{\partial \left[{\sigma}_{\stackrel{\xaf}{x}}\xf7\mu \right]}$.

Since the combined data in Figure 6 are linear with a near-zero intercept (−0.0168 ± 0.00443), then

$\frac{\stackrel{\xaf}{\Delta}}{\sqrt[-2]{n\cdot \mu}}=\frac{\stackrel{\xaf}{\Delta}}{{\sigma}_{\stackrel{\xaf}{x}}\xf7\mu}$

therefore cross-multiplying gives

$\stackrel{\xaf}{\Delta}\cdot {\sigma}_{\stackrel{\xaf}{x}}\xf7\mu =\stackrel{\xaf}{\Delta}\cdot \sqrt[-2]{n\cdot \mu}$

and dividing both sides by $\stackrel{\xaf}{\Delta}$ produces the equality

${\sigma}_{\stackrel{\xaf}{x}}\xf7\mu =\sqrt[-2]{n\cdot \mu}$.

All sampling error-related findings are summarized in Figure 7.

3.2. Demonstration That $A=\partial {s}_{{\Delta}_{j}}/\partial \stackrel{\xaf}{\Delta}={C}_{V}\left[{\Delta}_{j}\right]$

Lastly, all these assertions are substantiated by the observation (Figure 8) that the standard deviations associated with all our sampling error measurements ( ${s}_{{\Delta}_{j}}$ ) change linearly as a function of the 4 (P-I, P-II, MPN, Gaussian) sets of $\stackrel{\xaf}{\Delta}$ data, with an average slope (i.e., the average of the 4 $\partial {s}_{{\Delta}_{j}}/\partial \stackrel{\xaf}{\Delta}$ values = 0.716 ± 0.0739) equivalent to the various values for A in Figures 1-3, Figure 5 and Figure 6. In fact, the slope in Figure 8 defines the coefficient of variation in $\stackrel{\xaf}{\Delta}$ ( ${C}_{V}\left[{\Delta}_{j}\right]$ ) and, if it equals A, then

$\frac{\partial {s}_{{\Delta}_{j}}}{\partial \stackrel{\xaf}{\Delta}}=\frac{\partial \stackrel{\xaf}{\Delta}}{\partial X}$ (12)

where X = either $\sqrt[-2]{n\cdot \mu}$ or ${\sigma}_{\stackrel{\xaf}{x}}\xf7\mu $. Since ${s}_{{\Delta}_{j}}$ in Figure 8 and $\stackrel{\xaf}{\Delta}$ in Figure 6 are linear functions with a near zero intercept then, assuming Equation (12) is true,

$\frac{{s}_{{\Delta}_{j}}}{\stackrel{\xaf}{\Delta}}=\frac{\stackrel{\xaf}{\Delta}}{X}$.

Substituting $\stackrel{\xaf}{\Delta}$ with $A\cdot X$

Figure 7. Summary of curve-fitting results associated with each PDF and method for calculating empirical stochastic sampling error ( $\stackrel{\xaf}{\Delta}$ ). Each constant of proportionality A is presented ± ASE. For binomial data (MPN) $\mu =V\cdot \delta $ (the population average number of entities in V) and $n\cdot {P}^{+}={\mu}^{+}$ (the population average number of positive responses out of n observations).

$\frac{{s}_{{\Delta}_{j}}}{A\cdot X}=\frac{A\cdot X}{X}$

$\frac{{s}_{{\Delta}_{j}}}{X}={A}^{2}$

${s}_{{\Delta}_{j}}={A}^{2}\cdot X=A\left(A\cdot X\right)=A\cdot \stackrel{\xaf}{\Delta}$

and therefore

$\frac{{s}_{{\Delta}_{j}}}{\stackrel{\xaf}{\Delta}}={C}_{V}\left[{\Delta}_{j}\right]=A$

The above equality establishes that the coefficient of variation associated with $\stackrel{\xaf}{\Delta}$ ( ${C}_{V}\left[{\Delta}_{j}\right]$ ) is equivalent to the proportionality constant A seen in Figures 1-3 and Figure 6. Thus sampling errors can be estimated from the relationship $\stackrel{\xaf}{\Delta}={C}_{V}\left[{\Delta}_{j}\right]\times {C}_{V}\left[\stackrel{\xaf}{x}\right]$ whereupon ${C}_{V}\left[{\Delta}_{j}\right]~0.75$ for all PDFs we have tested.
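The practical upshot of $\stackrel{\xaf}{\Delta}={C}_{V}\left[{\Delta}_{j}\right]\times {C}_{V}\left[\stackrel{\xaf}{x}\right]$ is a one-line predictor of expected sampling error. The helper below is hypothetical (not from the paper), taking A ~ 0.75 as the report suggests and $\sigma =\sqrt{\mu}$ when no σ is supplied (the Poisson/MPN case):

```python
import math

def expected_sampling_error(n, mu, sigma=None, A=0.75):
    """Predicted mean relative sampling error, Delta_bar ~ A * C_V[xbar],
    with A ~ 0.75 for all PDFs tested in the paper.
    Omit sigma for Poisson or binomial/MPN data (sigma = sqrt(mu))."""
    if sigma is None:                 # Poisson / MPN case
        sigma = math.sqrt(mu)
    return A * sigma / (math.sqrt(n) * mu)

# e.g. colony counts (Poisson) with mu = 25 per plate, n = 6 replicate plates:
print(round(expected_sampling_error(n=6, mu=25.0), 4))   # -> 0.0612 (~6%)
```

Such a calculation lets one choose n in advance so that the anticipated $\stackrel{\xaf}{\Delta}$ falls below a target threshold.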

3.3. Minimized Errors Associated with a Well-Sampled Food Microbiome via Most Probable Composition [7]

Based upon these results, the estimation of ${C}_{V}\left[\stackrel{\xaf}{x}\right]$ (i.e., ${s}_{\stackrel{\xaf}{x}}\xf7\stackrel{\xaf}{x}$ ) should be germane in determining whether data have been appropriately sampled. Figure 9 illustrates that all stochastic errors associated with native aerobic bacteria surviving

Figure 8. (A)-(D): Dependency of the standard deviation (plotted ± P = 0.05 confidence limits) derived from each experimental ${\Delta}_{j}$ array ( $Y={s}_{{\Delta}_{j}}$; $j=1,2,\cdots ,25$ ) on their averages ( $X=\stackrel{\xaf}{\Delta}$ ): Figure 8(A) = P-I data (Spearman’s coefficient of rank correlation: [22] ${\rho}_{S}=0.996$; $P\ll {10}^{-3}$ ); Figure 8(B) = P-II data ( ${\rho}_{S}=0.988$; $P\ll {10}^{-3}$ ); Figure 8(C) = MPN data ( ${\rho}_{S}=0.979$; $P\ll {10}^{-3}$ ); Figure 8(D) = Gaussian data ( ${\rho}_{S}=0.994$; $P\ll {10}^{-3}$ ). The average slopes associated with these 4 relationship = 0.716 ± 0.0739 (± s). All points (25× $\stackrel{\xaf}{\Delta}$ per set) from (A) through (D) are combined in the bottom-most figure ( $\text{d}{s}_{{\Delta}_{j}}/\text{d}\stackrel{\xaf}{\Delta}=0.661\pm 0.0186$; ± ASE). The value $\text{d}{s}_{{\Delta}_{j}}/\text{d}\stackrel{\xaf}{\Delta}$ is equivalent to an experimental coefficient of variation for $\stackrel{\xaf}{\Delta}={C}_{V}\left[{\Delta}_{j}\right]$.

on commercially available, frozen vegetables were sufficiently sampled using n = 16 - 18 inasmuch as the ${C}_{V}\left[\stackrel{\xaf}{x}\right]$-values associated with the normalized colony counts (CFU g^{−1} averaged across all $\mathcal{l}$ dilutions = ${\stackrel{\xaf}{x}}_{\mathcal{l}}$ ÷ 0.007 mL per drop ÷ ${0.5}^{\mathcal{l}}$ dilution factor × 57.2 mL total original sample volume ÷ 28.6 g total frozen vegetable mass) were appropriately small (ranging between ca. 2% and 4%). In a

Figure 9. Estimation of the stochastic sampling errors ( ${\stackrel{\xaf}{x}}_{\mathcal{l}}\xf7{\stackrel{\xaf}{x}}_{\mathcal{l}-1}$ ~ calculated dilution factors; $s~\sqrt{{\stackrel{\xaf}{x}}_{\mathcal{l}}}$; ${C}_{V}\left[\stackrel{\xaf}{x}\right]$ ~ 4% for all counts across all dilutions $\mathcal{l}$ ) associated with a well-sampled [15] (n = 16 - 18) Poisson population (native bacteria on frozen vegetables: 28.6 grams rinsed with 57.2 mL PBS + Tween 20). All the colonies in $\mathcal{l}$ = 2 (Control & grown at 30˚C = 55 colonies; Hollow Fiber Concentrated & grown at 30˚C = 49 colonies; Control & grown at 37˚C = 41 colonies; Hollow Fiber Concentrated & grown at 37˚C = 41 colonies) were collected and identified using 16S rDNA Sanger sequencing (EubA and EubB primers) as described previously [7]. Bacterial compositions were nearly identical for all samplings and treatment combinations.

similar vein, it is pertinent that the observed (s) and calculated ( $\sqrt{{\stackrel{\xaf}{x}}_{\mathcal{l}}}$ ) standard deviations associated with the counts per drop were equivalent, since the average deviation ( $\left|s-\sqrt{{\stackrel{\xaf}{x}}_{\mathcal{l}}}\right|$ ) from ideality varied only 15.7% ± 3.54% ( $\pm {s}_{\stackrel{\xaf}{x}}$ ). Lastly, it is also significant that the dilution factors calculated from the ratios of average plate counts ( ${\stackrel{\xaf}{x}}_{\mathcal{l}}\xf7{\stackrel{\xaf}{x}}_{\mathcal{l}-1}$ ) were very close to ½ (average 0.523 ± 0.0172), which also argues for a minimized $\Delta $.
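The normalization stated parenthetically above (counts per drop to CFU g^{−1}) reduces to a short function. The constants are those given in the text; the example mean count is hypothetical:

```python
def cfu_per_gram(mean_count, dilution_level,
                 drop_vol_mL=0.007, rinse_vol_mL=57.2, sample_mass_g=28.6):
    """Normalize a mean drop count at dilution level l to CFU per gram:
    xbar_l / drop volume / 0.5**l dilution factor * rinse volume / mass."""
    return (mean_count / drop_vol_mL / (0.5 ** dilution_level)
            * rinse_vol_mL / sample_mass_g)

# e.g. an average of 27.5 colonies per drop at dilution level l = 2
print(round(cfu_per_gram(27.5, 2)))   # -> 31429 (CFU per gram)
```

Note that 57.2 mL ÷ 28.6 g is exactly a factor of 2, so the dilution level dominates the normalization.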

Across the 4 observational sets (I, II, III, and IV) depicted in Figure 9, the total number of collected colonies (from $\mathcal{l}=2$ ) was 55 (n = 16), 49 (n = 16), 42 (n = 17), and 41 (n = 18), respectively. Bacterial identifications for each of these colonies were based upon matching 1200 - 1400 basepair rDNA contigs against NCBI’s GenBank database. The rRNA “gene” sequencing results for the 2 major isolates (making up 88.3% ± 3.28% of the total sampled colonies) show that the 4 sets of observed bacterial compositions were nearly identical (43.6% ± 8.05% Leuconostoc and 44.6% ± 13.3% Lactococcus; ±s) [23]. The remainder of the colonies was mainly Acinetobacter (3.74% ± 3.34%) and Streptococcus (4.17% ± 2.75%) with small amounts of diverse isolates (e.g., Staphylococcus, Arthrobacter, Sphingobacterium, Enterococcus, Kocuria, Raoultella, and Bacillus: averaging 1.49% ± 1.09% each). Such variability is expected for the relatively rare isolates (≤4%) due to errors associated with random sampling. The two major species sampled were relatively repeatable because of their abundance, adequate sampling, and very little treatment effect. The minor constituents would have to have been sampled 2.77 ± 0.647-fold more (n > 44) to achieve an accuracy equivalent to the Leuconostoc and Lactococcus fractions, since the requisite number of samplings for the low count fractions is proportional to the inverse cube root [5] [16] of the number of counts per sampled volume (~ $\sqrt[3]{{\stackrel{\xaf}{x}}_{major}}\xf7\sqrt[3]{{\stackrel{\xaf}{x}}_{minor}}$ ).
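The inverse cube-root scaling just invoked can be written as a small helper. It is a sketch of the relation from refs. [5] [16], and the 21-fold count ratio in the example is an assumed illustrative value chosen because 2.77³ ≈ 21, reproducing the ~2.77-fold figure quoted above:

```python
def required_replicates(n_ref, count_ref, count_target):
    """Replicates needed for a rarer fraction, scaling n by the inverse
    cube root of counts per sampled volume (n ~ mu**(-1/3))."""
    return n_ref * (count_ref / count_target) ** (1.0 / 3.0)

# e.g. a major fraction adequately sampled at n = 16 vs. a fraction
# counted ~21x less often (assumed illustrative numbers):
print(round(required_replicates(16, 21.0, 1.0), 1))   # -> 44.1
```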

4. Summary

We have performed analyses of the empirical stochastic sampling errors linked to data generated from 3 common probability density functions, and have used them to describe the limiting behavior of $\Delta $ by generating models that suggest a generalized, and facile, mathematical solution. Based upon all our experiments, the common algebraic result, regardless of parent distribution, is that experimental sampling errors are proportional to ${\sigma}_{\stackrel{\xaf}{x}}\xf7\mu $. This generalized relationship is intuitively reasonable inasmuch as this is the ${C}_{V}$ for any population of sample means ( ${C}_{V}\left[\stackrel{\xaf}{x}\right]$ ) and describes how closely $\stackrel{\xaf}{x}$ values approach μ as n increases. The proportionality constant for all these findings was found to be mathematically related to ${C}_{V}[{\Delta}_{j}]$, or $\partial {s}_{{\Delta}_{j}}/\partial \stackrel{\xaf}{\Delta}$, which is the coefficient of variation associated with the error measurement itself. Lastly, using estimates of these sampling-associated errors ( ${C}_{V}\left[\stackrel{\xaf}{x}\right]~{s}_{\stackrel{\xaf}{x}}\xf7\stackrel{\xaf}{x}$ ), we show that when a test microbiome was sufficiently sampled, several measures of stochastic sampling error were reasonably small for both counting and DNA sequence-based results.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Irwin, P.L., He, Y. and Chen, C.-Y. (2019) Deconvolution of the Error Associated with Random Sampling. Advances in Pure Mathematics, 9, 205-227. https://doi.org/10.4236/apm.2019.93010

References

- 1. Halvorson, H.O. and Ziegler, N.R. (1933) Application of Statistics to Problems in Bacteriology. I. A Means of Determining Bacterial Population by the Dilution Method. Journal of Bacteriology, 25, 101-121.
- 2. Kubitschek, H.E. (1990) Cell Volume Increase in Escherichia coli after Shifts to Richer Media. Journal of Bacteriology, 172, 94-101. https://doi.org/10.1128/jb.172.1.94-101.1990
- 3. Barkworth, H. and Irwin, J.O. (1938) Distribution of Coliform Organisms in Milk and the Accuracy of the Presumptive Coliform Test. Journal of Hygiene, 38, 446-457. https://doi.org/10.1017/S0022172400011311
- 4. Best, D.J. (1990) Optimal Determination of Most Probable Numbers. International Journal of Food Microbiology, 11, 159-166. https://doi.org/10.1016/0168-1605(90)90051-6
- 5. Irwin, P., Reed, S., Nguyen, L., Brewster, J. and He, Y. (2013) Non-Stochastic Sampling Error in Quantal Analyses for Campylobacter Species on Poultry Products. Analytical and Bioanalytical Chemistry, 405, 2353-2369. https://doi.org/10.1007/s00216-012-6659-2
- 6. Irwin, P., Gehring, A., Tu, S.-I., Brewster, J., Fanelli, J. and Ehrenfeld, E. (2000) Minimum Detectable Level of Salmonellae Using a Binomial-Based Ice Nucleation Detection Assay. Journal of AOAC International, 83, 1087-1095.
- 7. Irwin, P.L., Nguyen, L.-H.T., Chen, C.-Y. and Paoli, G. (2008) Binding of Nontarget Microorganisms from Food Washes to Anti-Salmonella and anti-E. coli O157 Immunomagnetic Beads: Most Probable Composition of Background Eubacteria. Analytical and Bioanalytical Chemistry, 391, 525-536. https://doi.org/10.1007/s00216-008-1959-2
- 8. de St. Groth, S.F. (1982) The Evaluation of Limiting Dilution Assays. Journal of Immunological Methods, 49, R11-R23. https://doi.org/10.1016/0022-1759(82)90269-1
- 9. Bevington, P.R. and Robinson, D.K. (1992) Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill, Boston, 17-23 and 41-43.
- 10. Irwin, P., Fortis, L. and Tu, S.-I. (2001) A Simple Maximum Probability Resolution Algorithm for Most Probable Number Analysis Using Microsoft Excel. Journal of Rapid Methods and Automation in Microbiology, 9, 33-51. https://doi.org/10.1111/j.1745-4581.2001.tb00226.x
- 11. Gosset, W.S. (1907) “Student” on the Error of Counting with a Haemocytometer. Biometrika, 5, 351-360. https://doi.org/10.1093/biomet/5.3.351
- 12. Fisher, R.A. (1922) On the Mathematical Foundations of Theoretical Statistics. Philosophical Transactions of the Royal Society, London, Series A, 222, 309-368. https://doi.org/10.1098/rsta.1922.0009
- 13. Irwin, P.L., Nguyen, L.-H.T., Paoli, G.C. and Chen, C.-Y. (2010) Evidence for a Bimodal Distribution of Escherichia coli Doubling Times below a Threshold Initial Cell Concentration. BMC Microbiology, 10, 207.
- 14. Chen, C.-Y., Nace, G.W. and Irwin, P.L. (2003) A 6×6 Drop Plate Method for Simultaneous Colony Counting and MPN Enumeration of Campylobacter jejuni, Listeria monocytogenes, and Escherichia coli. Journal of Microbiological Methods, 55, 475-479. https://doi.org/10.1016/S0167-7012(03)00194-5
- 15. Irwin, P.L., Nguyen, L.-H.T. and Chen, C.-Y. (2008) Binding of Nontarget Microorganisms from Food Washes to Anti-Salmonella and Anti-E. coli O157 Immunomagnetic Beads: Minimizing the Errors of Random Sampling in Extreme Dilute Systems. Analytical and Bioanalytical Chemistry, 391, 515-524. https://doi.org/10.1007/s00216-008-1961-8
- 16. Irwin, P.L., Nguyen, L.-H.T. and Chen, C.-Y. (2010) The Relationship between Purely Stochastic Sampling Error and the Number of Technical Replicates Used to Estimate Concentration at an Extreme Dilution. Analytical and Bioanalytical Chemistry, 398, 895-903. https://doi.org/10.1007/s00216-010-3967-2
- 17. Trotter, H.F. (1959) An Elementary Proof of the Central Limit Theorem. Archiv der Mathematik, 10, 226-234. https://doi.org/10.1007/BF01240790
- 18. Hartley, H.O. (1961) The Modified Gauss-Newton Method for Fitting of Non-Linear Regression Functions by Least Squares. Technometrics, 3, 269-280. https://doi.org/10.1080/00401706.1961.10489945
- 19. Irwin, P.L., Damert, W.C. and Doner, L.W. (1994) Curve Fitting in Nuclear Magnetic Resonance Spectroscopy: Illustrative Examples Using a Spreadsheet and Microcomputer. Concepts in Magnetic Resonance, 6, 57-67. https://doi.org/10.1002/cmr.1820060105
- 20. Beers, Y. (1957) Introduction to the Theory of Error. Addison-Wesley Publishing Company, Inc., Reading, 29-30.
- 21. Salter, C. (2000) Error Analysis Using the Variance-Covariance Matrix. Journal of Chemical Education, 77, 1239-1243. https://doi.org/10.1021/ed077p1239
- 22. Steel, R.G.D. and Torrie, J.H.D. (1960) Principles and Procedures of Statistics. McGraw-Hill, New York, 409.
- 23. Irwin, P., Capobianco, J., Nguyen, L., He, Y., Gehring, M., Gehring, A. and Chen, C.-Y. (2019) Bacterial Cell Recovery after Hollow Fiber Microfiltration Sample Concentration and Washing: Most Probable Bacterial Composition in Frozen Vegetables.

Definitions

Indices = i ( $=1,2,\cdots ,n$ ) observations per experiment; j ( $=1,2,\cdots ,J=100$ ) experiments with n observations each; k ( $=1,2,\cdots ,K$ ) rows of X-Y values; $\mathcal{l}$ ( $=1,2,\cdots ,L$ ) dilutions; m ( $=1,2,\cdots ,M$ ) iterations; p ( $=1,2,\cdots ,P$ ) parameters

${\Delta}_{j}$ = j^{ }^{th} experimental measure of sampling error out of J = 100 experiments: Equations (7)-(10).

$\stackrel{\xaf}{\Delta}$ = average sampling error in J = 100 observations of ${\Delta}_{j}$

A = proportionality constant associated with $\stackrel{\xaf}{\Delta}$ curve-fitting to n, μ (or σ)

${s}_{{\Delta}_{j}}$ = standard deviation associated with ${\Delta}_{j}$ measurement; for this work there are 25 ( $n\times \mu $ or $n\times \sigma $ for the Gaussian populations) such ${s}_{{\Delta}_{j}}$ for each PDF type (2 types of Poisson, MPN or binomial, Gaussian)

$\mu $ = for either Poisson PDF or MPN assays ( $\mu =V\cdot \delta $ ), the population average number of biological entities, or other analytes, per test; for Gaussian PDF, the population’s average of any real-valued, randomly changing variable

V = the sample volume to be tested

${V}_{e}$ = volume of the biological entity, or other analyte, being tested

$\delta $ = concentration of the biological entity (count ÷ V) or other analyte

${\mu}^{+}$ = population average number of positive growth responses (MPN) out of n observations; ${\mu}^{+}=n\cdot {P}^{+}$

${\sigma}^{+}$ = the standard deviation associated with the probability density of ${x}^{+}$; the Gaussian approximations for ${\sigma}^{+}$ are plotted in Figure 5(C) as a function of Gaussian best fits for ${\mu}^{+}$

${P}^{-}$ = probability that ${V}_{e}$ will NOT contain the biological entity, or other analyte, being tested

${P}^{+}$ = probability that ${V}_{e}$ will contain the biological entity, or other analyte, being tested; ${P}^{+}=1-{P}^{-}$; Equation (1)

${\partial}_{X}f\left[X\right]$ = $\partial f\left[X\right]/\partial X$

${x}_{ij}$ = for Poisson populations, the i^{ }^{th} observation’s number of counts per tested volume, surface area, etc. for each j^{ }^{th} experiment; for Gaussian populations, any real-valued, randomly changing variable

${\stackrel{\xaf}{x}}_{j}$ = $\frac{1}{n}\cdot {\displaystyle {\sum}_{i=1}^{n}{x}_{ij}}$

${x}_{j}^{+}$ = j^{ }^{th} experiment’s number of positive growth responses out of n observations;
${x}_{j}^{+}={\displaystyle {\sum}_{i=1}^{n}{\theta}_{ij}}$ where θ = 1 (positive) or 0 (negative)

${\stackrel{\xaf}{x}}_{j}^{+}$ = j^{th} experiment’s number of positive counts in V volume; ${\stackrel{\xaf}{x}}_{j}^{+}=\mathrm{ln}\left[n\xf7\left(n-{x}_{j}^{+}\right)\right]$; the x-bar symbol is used here because this relation contains a parameter, ${x}_{j}^{+}$, which is the result of a summation across all ${\theta}_{ij}$; it just isn’t normalized to n

n = number of technical replicates in each j^{th} experiment; for MPN, number of observations each of volume V; for Poisson populations we have found [15] that the minimal number of replicates per assay was ${n}_{calc}={n}_{\mu \to 1}\cdot \sqrt[-3]{\mu}$ where ${n}_{\mu \to 1}$ is the number of replicates necessary to enumerate a population with μ = 1

$\sigma $ = population standard deviation associated with μ

${\sigma}_{\stackrel{\xaf}{x}}$ = standard deviation of a population of sample means ( $\stackrel{\xaf}{x}$ ); the formula for the ${\sigma}_{\stackrel{\xaf}{x}}$ statistic can be derived from the propagation of errors method [20] without covariance

$\begin{array}{c}{\sigma}_{\stackrel{\xaf}{x}}=\sqrt{{\left(\frac{\partial \stackrel{\xaf}{x}}{\partial {x}_{1}}\right)}^{2}{\sigma}_{{x}_{1}}^{2}+{\left(\frac{\partial \stackrel{\xaf}{x}}{\partial {x}_{2}}\right)}^{2}{\sigma}_{{x}_{2}}^{2}+\cdots +{\left(\frac{\partial \stackrel{\xaf}{x}}{\partial {x}_{n}}\right)}^{2}{\sigma}_{{x}_{n}}^{2}}\\ =\sqrt{n\frac{{\sigma}^{2}}{{n}^{2}}}=\frac{\sigma}{\sqrt{n}}\end{array}$

since

$\frac{\partial \stackrel{\xaf}{x}}{\partial {x}_{1}}=\frac{\partial \stackrel{\xaf}{x}}{\partial {x}_{2}}=\cdots =\frac{\partial \stackrel{\xaf}{x}}{\partial {x}_{n}}=\frac{1}{n}$

and

${\sigma}_{{x}_{\text{1}}}^{2}={\sigma}_{{x}_{\text{2}}}^{2}=\cdots ={\sigma}_{{x}_{n}}^{2}={\sigma}^{2}$.
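The propagation-of-errors result above, ${\sigma}_{\stackrel{\xaf}{x}}=\sigma \xf7\sqrt{n}$, can be verified numerically; the sketch below (illustrative parameter values) shows the spread of simulated sample means shrinking as $1/\sqrt{n}$:

```python
import numpy as np

# Numerical check of sigma_xbar = sigma / sqrt(n): simulate J experiments
# of n observations each and compare the std of the means with sigma/sqrt(n).
rng = np.random.default_rng(20)
mu, sigma, J = 50.0, 8.0, 20_000

for n in (4, 16, 64):
    xbar = rng.normal(mu, sigma, size=(J, n)).mean(axis=1)
    print(n, round(xbar.std(ddof=1), 3), round(sigma / np.sqrt(n), 3))
```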

${s}_{j}$ = any j^{th} experiment’s estimation of population standard deviation

${s}_{\stackrel{\xaf}{x}}$ = estimation of ${\sigma}_{\stackrel{\xaf}{x}}$ from a limited number of ${\stackrel{\xaf}{x}}_{j}$; ${s}_{\stackrel{\xaf}{x}}={s}_{j}\xf7\sqrt{n}$

${C}_{V}\left[\stackrel{\xaf}{x}\right]$ = coefficient of variation for a population of means; ${C}_{V}\left[\stackrel{\xaf}{x}\right]={\sigma}_{\stackrel{\xaf}{x}}\xf7{\mu}_{\stackrel{\xaf}{x}}={\sigma}_{\stackrel{\xaf}{x}}\xf7\mu $ estimated as ${s}_{\stackrel{\xaf}{x}}\xf7\stackrel{\xaf}{x}$

${C}_{V}\left[x\right]$ = coefficient of variation for any set of observations x; ${C}_{V}\left[x\right]=\frac{\sigma}{\mu}$ estimated as $\frac{s}{\stackrel{\xaf}{x}}$

${C}_{V}\left[{\Delta}_{j}\right]$ = $\partial {s}_{{\Delta}_{j}}/\partial \stackrel{\xaf}{\Delta}~{s}_{{\Delta}_{j}}\xf7\stackrel{\xaf}{\Delta}$ if the ${s}_{{\Delta}_{j}}$ vs. $\stackrel{\xaf}{\Delta}$ intercept ~ 0

CLT = central limit theorem: the mean ( ${\mu}_{\stackrel{\xaf}{x}}$ ) of a population of observed means ( $\stackrel{\xaf}{x}$ ) will be approximately equal to the mean of the sampled population (μ) and the standard deviation of this population of means will be approximately equal to ${\sigma}_{\stackrel{\xaf}{x}}$; Equation (5) with $x=\stackrel{\xaf}{x}$, $\mu ={\mu}_{\stackrel{\xaf}{x}}=\mu $, and $\sigma ={\sigma}_{\stackrel{\xaf}{x}}$

PDF = probability density function or probability distribution function

${P}_{b}$ = binomial PDF: Equation (3)

${P}_{P}$ = Poisson PDF: Equation (4)

${P}_{G}$ = Gaussian PDF: Equation (5)

CL = confidence limit = t-statistic × ${s}_{{f}_{k}}$ = $t\cdot {s}_{{f}_{k}}$

ASE = asymptotic standard error [19] ; for any fitting parameter $\omega $, $ASE={s}_{\omega}=\sqrt{{s}_{Y}^{2}\cdot {\left[{Z}^{\text{T}}Z\right]}_{\omega \omega}^{-1}}$ where ${s}_{Y}^{2}$ = residual sum of squares ÷ (K − M) and M = the number of fitting parameters ${\pi}_{p}$ ( $p=1,2,\cdots ,P$ )

${s}_{{f}_{k}}$ = k^{th} row standard error of fitting function f_{k};
${s}_{{f}_{k}}=\sqrt{{s}_{Y}^{2}\left({Z}_{k}{\left[{Z}^{\text{T}}Z\right]}^{-1}{Z}_{k}^{\text{T}}\right)}$

$Z$ = partial first derivative matrix of ${f}_{k}$ with respect to associated fitting parameters ${\pi}_{1},{\pi}_{2},\cdots ,{\pi}_{P}$

${Z}^{\text{T}}$ = transposition of $Z$

${Z}_{k}$ = $\left[\begin{array}{cc}{\partial}_{{\pi}_{1}}{f}_{k}& {\partial}_{{\pi}_{2}}{f}_{k}\end{array}\right]$ for ${f}_{k}=f\left[{X}_{k};{\pi}_{p}\right]$
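The ASE and $Z$ definitions above can be sketched numerically. The example below uses synthetic data from the power law ${\Delta}_{n}\cdot {n}^{a}$ that recurs throughout the paper; for simplicity the parameters are estimated by a log-log linear fit rather than the modified Gauss-Newton method [18], which is adequate for illustration (data and parameter values are assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vals = np.array([3.0, 6.0, 9.0, 12.0, 24.0])
y = 0.8 * n_vals ** -0.5 + rng.normal(0.0, 0.01, n_vals.size)  # synthetic

# Parameter estimates D, a for f[n] = D * n**a via a log-log linear fit
a, lnD = np.polyfit(np.log(n_vals), np.log(y), 1)
D = np.exp(lnD)

f = D * n_vals ** a
Z = np.column_stack([n_vals ** a, f * np.log(n_vals)])   # [df/dD, df/da]
resid = y - f
s_Y2 = resid @ resid / (n_vals.size - 2)                 # RSS / (K - M), M = 2
ase = np.sqrt(s_Y2 * np.diag(np.linalg.inv(Z.T @ Z)))    # ASE for D, then a
print(round(D, 3), round(a, 3), np.round(ase, 4))
```

The same covariance matrix, sandwiched as ${Z}_{k}{\left[{Z}^{\text{T}}Z\right]}^{-1}{Z}_{k}^{\text{T}}$, yields the row-wise standard errors ${s}_{{f}_{k}}$ used for the confidence limits.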