The mean squared estimation errors in Figure 4 were estimated by assuming the smooth curve in Figure 3 is the true slowly-varying mean, say µt, generating Poisson observations having mean µt, and using the local mean estimated just prior to each profile to estimate µt. Figure 5 is similar to Figure 4, but shows that a running mean formed from the successive local means has smaller estimation error than the local mean alone.
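As a rough illustration of how such estimation errors can be computed, the R sketch below treats an arbitrary slowly varying curve as the true mean µt, generates Poisson counts, and compares the mean squared error of a local mean with that of a running mean of the local means. The sinusoidal curve, the window size m, and the exponentially weighted form of the running mean (weight w) are assumptions made for illustration, not the exact choices used for Figures 4 and 5.

# Hedged sketch: local mean vs running mean for a slowly varying Poisson background
set.seed(1)
n.profiles = 1000                                     # number of successive profiles
mu.t = 8 + 2*sin(2*pi*(1:n.profiles)/n.profiles)      # assumed slowly varying "true" mean (cps)
m = 50                                                # assumed counts used for each local mean
w = 0.1                                               # assumed weight for the running mean
local.mean = numeric(n.profiles)
run.mean = numeric(n.profiles)
for(t in 1:n.profiles) {
 x = rpois(m, lambda = mu.t[t])                       # counts observed just prior to profile t
 local.mean[t] = mean(x)
 run.mean[t] = if(t == 1) local.mean[1] else (1 - w)*run.mean[t-1] + w*local.mean[t]
}
mean((local.mean - mu.t)^2)                           # MSE of the local mean
mean((run.mean - mu.t)^2)                             # MSE of the running mean (smaller in this sketch)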

In practice, T is estimated using an estimate of the background and either Gaussian or Poisson probabilities to estimate the FAP. Estimation error in T can be viewed in our context as leading to estimation error in the FAP, but provided the running mean approach is used, that estimation error in the FAP is quite small.
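For example, one common choice under a Gaussian approximation (an assumption here, not necessarily the vendor's rule) is T = µ̂ + k√µ̂ with k chosen for a nominal FAP of 0.001. The sketch below shows how the actual FAP can then be estimated by simulation when µ̂ comes from a finite background averaging period; the values of mu, n.bkg, and the nominal FAP are illustrative assumptions.

# Hedged sketch: actual vs nominal FAP when T is set from an estimated background mean
set.seed(1)
mu = 8                         # assumed true background mean counts per profile
n.bkg = 100                    # assumed background observations used to estimate mu
k = qnorm(1 - 0.001)           # k for a nominal 0.001 FAP under the Gaussian approximation
nsim = 10^5
alarm = logical(nsim)
for(isim in 1:nsim) {
 muhat = mean(rpois(n.bkg, lambda = mu))   # background estimate just prior to the profile
 T.hat = muhat + k*sqrt(muhat)             # estimated alarm threshold T
 x = rpois(1, lambda = mu)                 # profile count with no source present
 alarm[isim] = (x >= T.hat)
}
mean(alarm)                    # actual FAP; compare with the nominal 0.001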

Figure 2. Alarm threshold k for 0.001 false alarm rate as a function of the duration of the background averaging period for 3 background means (2, 8, and 40 cps).

Figure 3. Example of real neutron average counts over approximately 48 hours.

Figure 4. The mean squared estimation error as a function of the background averaging time in units of 0.1 seconds.

Figure 5. The mean squared estimation error as a function of the background averaging time in units of 0.1 seconds using either a running mean or the local mean for the slowly varying real data shown in Figure 3.

4. Discussion and Summary

An important issue in fielding RPMs is that an RPM vendor must be selected from among viable candidates. This paper supports experiments aimed at testing RPMs prior to vendor selection. One of the simplest tests of an RPM is the “≥15 alarms in 20 repeats” rule, which arose from criterion A: it requires at least 95% confidence that Xi ≥ T with probability at least 0.50, where T is an alarm threshold. Note that even if a test has ≥15 alarms in 20 repeats, we cannot claim that, with probability 0.95, future Xi ≥ T with probability at least 0.50. That is why the term “confidence” is used, and it arises from defensible use of binomial probabilities. In fact, it is possible that the true P(Xi ≥ T) < 0.50 for all vehicles that pass the ≥15 alarms in 20 repeats rule (although the pass rate would then be low).
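The binomial calculation behind that confidence claim is easy to check. The short sketch below (an illustration, not the paper's code) shows that if the true per-trial alarm probability were only 0.50, the chance of seeing ≥15 alarms in 20 repeats is roughly 0.02, so passing the rule gives well over 95% confidence that P(Xi ≥ T) ≥ 0.50; it also shows that passing is possible, though unlikely, when the true probability is below 0.50.

# Probability of passing the ">= 15 alarms in 20 repeats" rule
1 - pbinom(14, size = 20, prob = 0.50)   # about 0.021 at the criterion A boundary p = 0.50
1 - pbinom(14, size = 20, prob = 0.49)   # still possible for p < 0.50, but the pass rate is low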

We have provided a non-parametric and a parametric option for both the Gaussian and the Poisson models for criterion A. The numerical values required to implement these options are summarized in Sections 2.3 and 2.5 and were estimated using a simple optimization function freely available in R, with example R code given in Appendix 2.

Typically the alarm threshold T is estimated from the background data, so we included an assessment of the actual versus the nominal false alarm probability as a function of the background averaging period.

We also added a practical requirement involving the RPM's detected count rate µ such that vehicles pass the test with probability at least 0.95. We did not consider the signal shape during the profile, because either the vehicle is assumed stationary, or the total neutron counts over the entire profile are used. References [4,5,9,10] provide analyses appropriate for alarm rules that do consider signal shape during the profile.

5. Acknowledgements

We acknowledge the Department of Homeland Security for funding the production of this material under DOE Contract Number DE-AC52-06NA25396 for the management and operation of Los Alamos National Laboratory.

REFERENCES

  1. B. Geelhood, J. Ely, R. Hansen, R. Kouzes, J. Schweppe and R. Warner, “Overview of Portal Monitoring at Border Crossings,” 2003 IEEE Nuclear Science Symposium Conference Record, Portland, 19-25 October 2003, pp. 513-517. doi:10.1109/NSSMIC.2003.1352095
  2. J. Ely, R. Kouzes, J. Schweppe, E. Siciliano, D. Strachan and D. Weier, “The Use of Energy Windowing to Discriminate SNM from NORM in Radiation Portal Monitors,” Nuclear Instruments and Methods in Physics Research A, Vol. 560, No. 2, 2005, pp. 373-387. doi:10.1016/j.nima.2006.01.053
  3. T. Burr, J. Gattiker, M. Mullen and G. Tompkins, “Statistical Evaluation of the Impact of Background Suppression on the Sensitivity of Passive Radiation Detectors,” Springer, New York, 2006.
  4. T. Burr, J. Gattiker, K. Myers and G. Tompkins, “Alarm Criteria in Radiation Portal Monitoring,” Applied Radiation and Isotopes, Vol. 65, No. 5, 2007, pp. 569-580. doi:10.1016/j.apradiso.2006.11.010
  5. T. Burr and M. Hamada, “The Performance of Neutron Alarm Rules in Radiation Portal Monitoring,” in revision for Technometrics, 2012.
  6. R. Kouzes, E. Siciliano, J. Ely, P. Keller and R. McConn, “Passive Neutron Detection for Interdiction of Nuclear Material at Borders,” Nuclear Instruments and Methods in Physics Research A, Vol. 584, No. 2-3, 2008, pp. 383-400. doi:10.1016/j.nima.2007.10.026
  7. D. Young, “Tolerance: An R Package for Estimating Tolerance Intervals,” Journal of Statistical Software, Vol. 36, No. 5, 2010. www.jstatsoft.org
  8. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, 2004. http://www.R-project.org
  9. S. Robinson, S. Bender, E. Flumerfelt, C. LoPresti and M. Woodring, “Time Series Evaluation of Radiation Portal Monitor Data for Point Source Detection,” IEEE Transactions on Nuclear Science, Vol. 56, No. 6, 2009, pp. 3688-3693. doi:10.1109/TNS.2009.2034372
  10. T. Schroettner, P. Kindl and G. Presle, “Enhancing Sensitivity of Portal Monitoring at Varying Transit Speed,” Applied Radiation and Isotopes, Vol. 67, No. 10, 2009, pp. 1878-1886. doi:10.1016/j.apradiso.2009.04.015

Appendix 1. The Distribution of the Detected Counts D.

Assume the true counts C have a Poisson distribution with mean µ, so C ~ Poisson(µ). Given a realization C, the detected counts D have a binomial distribution with mean Ce, where the detector efficiency e < 1, so D | C ~ binomial(C, e). This appendix shows that the unconditional distribution of the detected counts D is Poisson(µe).

Therefore, for d = 0, 1, 2, …,

P(D = d) = Σ_{c = d to ∞} P(D = d | C = c) P(C = c)
= Σ_{c = d to ∞} [c!/(d!(c−d)!)] e^d (1−e)^(c−d) exp(−µ) µ^c / c!
= [exp(−µ) (µe)^d / d!] Σ_{c = d to ∞} [µ(1−e)]^(c−d) / (c−d)!
= [exp(−µ) (µe)^d / d!] exp(µ(1−e))
= exp(−µe) (µe)^d / d!,

which is the Poisson(µe) probability mass function, so D ~ Poisson(µe).
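A quick simulation (a sketch using assumed illustrative values of µ and e) is consistent with this result:

# Simulate C ~ Poisson(mu) and D | C ~ binomial(C, e), then compare D with Poisson(mu*e)
set.seed(1)
mu = 40; e = 0.3; nsim = 10^5              # assumed illustrative values
C = rpois(nsim, lambda = mu)
D = rbinom(nsim, size = C, prob = e)
c(mean(D), var(D), mu*e)                   # mean and variance of D should both be near mu*e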

Appendix 2. R Code to Use Simulation in Repeated Calls to the Optimize Function

f2 = function(k = 2, mu = 500, n = 20, sig = 30, lprob = .05, nsim = 1000, do.normal = FALSE){
 # returns |1 - lprob - P(mean(x) - k*sqrt(mean(x)) <= mu)|; optimize() over k drives this toward 0
 temp2 = mu
 temp1 = numeric(nsim)
 for(isim in 1:nsim) {
  x = rpois(n = n, lambda = mu)
  # assume Poisson unless do.normal == TRUE
  temp1[isim] = mean(x) - k*mean(x)^.5
  if(do.normal) {
   x = rnorm(n = n, mean = mu, sd = sig)
   temp1[isim] = mean(x) - k*var(x)^.5
  }
 }
 abs(1 - lprob - mean(temp1 <= temp2))
}

 

Example result:

optimize(f2, interval = c(0.1,1), nsim = 10^5, mu = 2, n = 20, maximum = FALSE, lprob = 0.05)

$minimum

[1] 0.3648825

Grid search for parametric option

mu.grid = seq(4, 10, length = 10)

k = 0.3866; mu = 2; n = 20   # n = 20 repeats, as in the example call above
nsim = 10^5; thresh = 4*mu^.5 + mu
grid.save1 = numeric(length(mu.grid))
for(i in 1:length(mu.grid)) {
 mutemp = mu.grid[i]
 temp2 = thresh
 temp1 = numeric(nsim)
 for(isim in 1:nsim) {
  x = rpois(n = n, lambda = mutemp)
  temp1[isim] = mean(x) - k*mean(x)^.5
 }
 # estimated probability of passing for each candidate detected count rate mu.grid[i]
 grid.save1[i] = mean(temp1 >= thresh)
}
