atabase access controls.

4) Exploitation of input control (buffer overflows) to undermine availability and escalate privilege.

• With management approval, an appropriate threat alleviation system needs to be identified, along with the choice of the database version.

The tools used in the define phase are:

• Cost of Quality (CQ), where the cost of quality can be split into the Cost of Good Quality (CGQ) [14], accrued when a process conforms to certain guidelines, which in the context of security means following best practices in managing security policies.

• Cost of Poor Quality (CPQ) [14], accrued due to nonconformance. A tool commonly used to focus on the failures that make up CPQ is the Pareto chart [15].

A Pareto chart [15] is used to identify the financial loss due to threats to digital assets, i.e., the CPQ. A Pareto chart highlights the relative importance of one factor among many; in the case of security, it highlights the loss in revenue attributable to each security vulnerability. A typical Pareto chart of the CVSS severity of attacks on the PostgreSQL database (years 2001-2005) is shown in Figure 3. This chart represents the CVSS scores of vulnerabilities as they are prioritized for system integration. For a security management process, the severity rating of a threat is equated to financial dollars, and the resulting data spread should clearly show management where the priorities lie.
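As a minimal illustration (the loss figures below are hypothetical and are not the data behind Figure 3), a Pareto ordering can be computed by sorting vulnerabilities by estimated loss and accumulating the share of the total:

```python
# Minimal Pareto-ordering sketch with hypothetical loss figures per vulnerability.
vulns = {
    "SQL injection": 120_000,      # estimated revenue loss in dollars (assumed)
    "Buffer overflow": 80_000,
    "Weak authentication": 30_000,
    "Information leak": 10_000,
}

total = sum(vulns.values())
cumulative = 0.0
# Sort by loss, largest first, as a Pareto chart would.
for name, loss in sorted(vulns.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += loss
    print(f"{name:20s} loss=${loss:>8,} cumulative={100 * cumulative / total:5.1f}%")
```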

Another way to highlight the various aspects of a process is a SIPOC (supplier, input, process, output, customer) chart [16], which identifies the workflow interactions of a service. For a security policy management process, the SIPOC chart identifies how the security policy interacts with a computing service. The SIPOC chart for a security process is shown in Figure 4.

• Supplier of Input—System, Consumer, Malicious Content Provider, and Environment.

• Inputs—System Inputs, Consumer data input, Environmental input, and Malicious data input.

• Process—Computing Service for Content, Security Policy Directives and Countermeasures for Threats.

• Output—Processed Data and Monitoring Data.

• Customer—Consumer.
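The same SIPOC breakdown can be captured as a simple data structure; the sketch below merely restates the entries listed above:

```python
# SIPOC chart for the security policy management process, restated as a mapping.
security_sipoc = {
    "suppliers": ["System", "Consumer", "Malicious Content Provider", "Environment"],
    "inputs": ["System inputs", "Consumer data input", "Environmental input",
               "Malicious data input"],
    "process": ["Computing service for content", "Security policy directives",
                "Countermeasures for threats"],
    "outputs": ["Processed data", "Monitoring data"],
    "customers": ["Consumer"],
}

for stage, items in security_sipoc.items():
    print(f"{stage:10s}: {', '.join(items)}")
```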

4.2. 6σ—MEASURE Phase for Security Policy

The Measure phase involves the measurement and quantification of risks to digital assets in the service.

• Threat Impact due to software is measured by a system similar to the CVSS score.

• Risk due to hardware, which quantifies the level of trust the hardware can provide.

• Risk during operation of the computing service based on the threat model identified during the Define phase.

The CVSS base score consists of:

• Access Vector, denoting how the vulnerability is exploited.

• Access Complexity, denoting the complexity of the attack required to exploit the vulnerability once access to the system has been gained.

• Authentication, which highlights how many authentication steps an attacker must complete in order to exploit the vulnerability.

• Confidentiality Impact, which highlights how the vulnerability affects the unauthorized disclosure of data.

• Integrity Impact, which denotes the guarantees of trust in the content.

• Availability Impact, which denotes content accessibility in the face of a successful attack.
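For reference, the CVSS v2 guide [11] combines these six metrics into the base score; the sketch below uses the coefficients published in that guide and should be checked against it rather than treated as authoritative:

```python
# CVSS v2 base score sketch using the coefficients published in the CVSS v2 guide [11].
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    # Python's round() is a close stand-in for the guide's round_to_1_decimal.
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Example: remotely exploitable, low complexity, no authentication, partial C/I/A impact.
print(cvss2_base("network", "low", "none", "partial", "partial", "partial"))  # 7.5
```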

Figure 3. Pareto chart of CVSS scores for threats in PostgreSQL.

Figure 4. Security SIPOC chart of a computing process.

The tools used in the measure phase for Six Sigma are:

• Y = F(X) [6] tool, as shown in Figure 5, which identifies malicious input (X) and the related output (Y) for the various threats identified in the Define phase. Typically this analysis shows the causal relation between threat vectors and the corresponding vulnerability of a computing system. In Figure 5, the threat dataset (X), when processed by the computing system (F), identifies the vulnerability (Y). In this analysis, the Access Vector of CVSS is the threat dataset, and the Access Complexity and Authentication components of the CVSS base score are measured (a minimal sketch of this mapping follows this list of tools).

• FMEA (Failure Mode and Effects Analysis) [17] tool identifies threat vectors, the severity of threats, their causes, and the current inspection methodology used to evaluate the risks. Here, the Confidentiality Impact, Integrity Impact and Availability Impact of the CVSS base score are measured (an RPN-style sketch follows the list). The vulnerability data for PostgreSQL [13], obtained from the NVD (National Vulnerability Database) [18] and identified in the Define phase, shows the number of threats each year, as shown in Figure 6. Another important aspect of the policy creation process is to train the people who deal with the computing system and change the computing logic in any way; the security quality of the product depends on how well they are trained.

• Process Sigma [19] tool quantifies whether the current security policies are capable (Cp, Cpk) of meeting the identified threats by computing the process sigma. Cp indicates the capability of existing security policies to counter known and modeled threats; Cpk indicates how effective a security policy is in countering actual threats (a minimal Cp/Cpk sketch follows this list of tools):

1) The important factors here are the consumer specification and the operational specification. If the severity of threats is quantified within these specifications, then the CVSS risk score gives the value of risk.

2) This also has a bearing on the customer agreements. Attacks that are difficult to stage and require the customer to be an active participant, such as hardware attacks, fall beyond the operational specification of a computing service. Hence the customer agreement is drawn up to limit liabilities for the computing service provider in such cases. The Cpk value of risk for such hard-to-exploit attacks would be low, and these are then framed into consumer agreements.

• GAGE [6] tool is used to gage the repeatability and reproducibility (Gage R & R) of threat identification, and data is collected to remove false positives from the approach.
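As an illustration of the Y = F(X) view, the sketch below uses a hypothetical stand-in for the computing system F and illustrative threat inputs (none of this is the paper's actual system):

```python
# Y = F(X) sketch: pass a threat dataset X through a stand-in computing system F
# and record the observed outcome Y for each input (all inputs are illustrative).
def computing_system(request: str) -> str:
    """Hypothetical stand-in for the computing service F."""
    if "DROP TABLE" in request.upper() or len(request) > 1024:
        return "rejected"          # vulnerability not triggered
    return "processed"             # normal service output

threat_dataset = [                 # X: threat vectors derived from the CVSS Access Vector
    "normal query for user 42",
    "id=1; DROP TABLE users",      # injection attempt
    "A" * 4096,                    # oversized input (overflow attempt)
]

for x in threat_dataset:
    y = computing_system(x)        # Y: observed behaviour for this threat vector
    print(f"{x[:25]!r:30s} -> {y}")
```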
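FMEA results are often summarized as a risk priority number (RPN), the product of severity, occurrence, and detection ratings; the sketch below uses hypothetical ratings on 1-10 scales:

```python
# FMEA sketch: RPN = severity x occurrence x detection (hypothetical 1-10 ratings).
failure_modes = [
    # (threat vector, severity, occurrence, detection)
    ("SQL injection via search form", 9, 6, 4),
    ("Privilege escalation via buffer overflow", 10, 3, 7),
    ("Denial of service on login endpoint", 7, 5, 3),
]

# Rank failure modes by RPN so the highest-risk threat vectors surface first.
for name, sev, occ, det in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN={sev * occ * det:4d}  {name}")
```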
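The Cp and Cpk indices can be computed from measured risk scores against specification limits; a minimal sketch with hypothetical scores and limits:

```python
import statistics

def process_capability(samples, lsl, usl):
    """Cp and Cpk of measured risk scores against lower/upper specification limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)                    # spec spread vs. process spread
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)       # capability allowing for off-centring
    return cp, cpk

# Hypothetical CVSS-style risk scores observed in operation, with specification
# limits of 0.0 (lower) and 4.0 (upper) on the tolerated risk.
scores = [2.1, 2.6, 3.0, 2.4, 2.8, 3.3, 2.2, 2.9]
cp, cpk = process_capability(scores, lsl=0.0, usl=4.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```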

The Six Sigma Measure phase chart shown in Table 1 indicates the proposed mapping of the various Six Sigma tools to security measures.

Table 1. Measure phase mapping to security management.

Figure 5. Y = F(X) analysis for security.

Figure 6. Vulnerability rate each year of PostgreSQL [18].

4.3. 6σ—ANALYZE Phase for Security Policy

The Analyze phase determines the effectiveness of the security policies and threat models already in place. The goals of this phase are:

• Improvement to existing security policies.

• Identification of new threats and thereby changes to the threat model.

The CVSS temporal metrics provide measurements and analysis of:

• Exploitability, which measures the techniques of exploits and the availability of code that can be used to stage the exploit.

• Remediation Level, which deals with the type of fix that is available for a given vulnerability.

• Report Confidence, which deals with the degree of confidence in the existence of the vulnerability and the quality of information about its technical details.
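The CVSS v2 guide [11] folds these three factors into a temporal score by scaling the base score with published multipliers; a minimal sketch (values taken from the v2 guide, to be checked against it):

```python
# CVSS v2 temporal score sketch using multipliers from the CVSS v2 guide [11].
EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.9, "functional": 0.95, "high": 1.0}
REMEDIATION_LEVEL = {"official-fix": 0.87, "temporary-fix": 0.90, "workaround": 0.95,
                     "unavailable": 1.0}
REPORT_CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95, "confirmed": 1.0}

def cvss2_temporal(base_score, e, rl, rc):
    return round(base_score * EXPLOITABILITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc], 1)

# A 7.5 base score with functional exploit code, an official fix, and a confirmed report.
print(cvss2_temporal(7.5, "functional", "official-fix", "confirmed"))  # 6.2
```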

The tools used in the analyze phase are:

• Hypothesis testing [20] on threat data to test the efficacy of new security policies, by creating a null hypothesis (H0) and an alternate hypothesis (Ha). The alpha risk is kept at the industrial standard of 5% for hypothesis testing. This is also used to test security flags in automated testing tools before deployment, and can be established by measuring the exploitability defined in the CVSS temporal metric (a minimal sketch follows this list).

• Correlation and Regression to test known threat vectors and identify input-output relationships. This part deals with lab-based penetration and fuzz testing for software security and quality assurance [21]. The strength of the relationship is generally quantified by the Pearson coefficient, and can be highlighted by the remediation level and report confidence of the CVSS temporal score (a correlation sketch also follows the list).

• Analysis of Variance (ANOVA) [22] is hypothesis testing with variations of the input factors. It essentially states the effectiveness of the security framework for variations in, for example, input and temperature, or input and clock (a one-way ANOVA sketch is included below).
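As an illustration of the hypothesis test, the sketch below uses SciPy and hypothetical exploitability measurements taken before and after a policy change:

```python
from scipy import stats

# Hypothetical exploitability measurements before and after a new security policy.
before = [0.72, 0.68, 0.75, 0.71, 0.69, 0.74, 0.70, 0.73]
after  = [0.61, 0.64, 0.59, 0.63, 0.60, 0.62, 0.65, 0.58]

# H0: the new policy does not change exploitability; Ha: it does.
alpha = 0.05                                  # industrial-standard 5% alpha risk
t_stat, p_value = stats.ttest_ind(before, after)
print(f"p = {p_value:.4f}, reject H0: {p_value < alpha}")
```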
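A Pearson correlation between fuzz-testing input volume and observed failures can be computed in the same way (hypothetical data):

```python
from scipy import stats

# Hypothetical fuzz-testing data: malformed inputs sent vs. crashes observed.
malformed_inputs = [100, 200, 300, 400, 500, 600]
crashes_observed = [1, 3, 2, 5, 6, 8]

r, p_value = stats.pearsonr(malformed_inputs, crashes_observed)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```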
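Finally, a one-way ANOVA across operating conditions, with hypothetical detection rates of the security framework under three environmental settings:

```python
from scipy import stats

# Hypothetical detection rates of the security framework under three conditions.
nominal    = [0.95, 0.94, 0.96, 0.95]
high_temp  = [0.93, 0.92, 0.94, 0.93]
high_clock = [0.90, 0.91, 0.89, 0.92]

f_stat, p_value = stats.f_oneway(nominal, high_temp, high_clock)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests the condition matters
```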

Elements of the Analyze Phase are:

• Risk Assessment: Based on the available policy and threat models:

1) Decisions can be made on the degree of risk that can be taken.

2) Some policies may be too expensive to implement and not worth implementing for the product at hand; this quantification of risk helps make business and financial decisions.

3) Usage of the policy and threat models combined with the computing logic determines how people utilize a security system and helps to focus on critical threats and policies. Eventually, it feeds into the risk assessment for any future decision.

• Component Threat Model: The threat model in the analysis phase gives an overview of any modeled threats and the modeling of any new threats.

1) In a computing system built out of various components, a specific threat model exists for each component. For example, some components in a computing service may experience network-centric threats, whereas others might experience hardware-centric threats.

2) Monitoring is used to analyze the effectiveness of the policies so as to discover correlations between input-output data and threats to digital assets.

• Penetration Testing: Simulating and staging an attack on a computing service requires an understanding of how the computing service is used. It identifies various input-output characteristics based on the component threat model.

The proposed Analyze Phase mapping to security principles is shown in Table 2.

4.4. 6σ—IMPROVE Phase for Security Policy

The Improve phase, within the context of security policies, has to either create new security policies or improve existing ones. The tools used in the improve phase are:

• Design of Experiments (DOE) [23] is essentially ANOVA [22] applied to the whole system. The ANOVA measure in the Analyze phase is used to obtain variations for individual components of a computing service; in DOE, all variations in the computing service are taken into account to understand the effectiveness of the security framework, recording the risk value of a policy against a threat to a digital asset under those variations. This must always be done after the GAGE measurement has been conducted on the threats, since GAGE identifies the source of variations due to threats in different operating environments.
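A minimal full-factorial sketch, enumerating combinations of hypothetical environmental factors and recording a risk value for each run (the factors and the scoring function are illustrative only):

```python
from itertools import product

# Hypothetical two-level factors for a full-factorial design of experiments.
factors = {
    "temperature": ["nominal", "high"],
    "clock": ["nominal", "overclocked"],
    "input_rate": ["normal", "burst"],
}

def run_experiment(settings):
    """Stand-in for staging modeled threats against the service and recording risk."""
    # Assumed scoring: each stressed factor contributes to the observed risk value.
    return sum(1.0 for value in settings.values() if value not in ("nominal", "normal"))

names = list(factors)
for combo in product(*factors.values()):
    settings = dict(zip(names, combo))
    print(settings, "-> risk", run_experiment(settings))
```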

Elements of Improve Phase are:

• Security Policy Directive: The security policy directive comprises the actual policy definitions that are implemented. These definitions are invoked before the actual computing logic is executed, and they take feedback from the threat profile that was used to create the policies.

Table 2. Analyze phase mapping to security management.

• Security Policy Countermeasures: The countermeasure part of the security policy acts on any modeled threat encountered during operation. The effective decision on countermeasures lies with this policy definition.

The Improve Phase mapping to security management is shown in Table 3.

4.5. 6σ—CONTROL Phase for Security Policy

The Control phase of the security policy highlights the actual control of the computing service, with security policies operating in a feedback mode. The tools used in the control phase are:

• Statistical Process Control (SPC) [24] measures the critical characteristics of the process in real time and generates countermeasures to alleviate threats as they are identified (a minimal sketch follows this list).

• Mistake Proofing [25], also called Poka-Yoke, wherein policy definitions are error-proofed so that they cannot be misinterpreted. The Control Phase mapping to security principles is shown in Table 4.
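As an illustration of the SPC step above, the sketch below sets three-sigma control limits from an in-control baseline and flags live observations that fall outside them (all monitoring figures are hypothetical):

```python
import statistics

# Hypothetical monitoring data: requests blocked by the security policy per hour.
baseline = [12, 15, 14, 13, 16, 15, 14, 13, 15, 14]   # in-control history sets the limits
live = [14, 16, 41, 13, 15]                            # observations from the running service

mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mu + 3 * sigma, max(0.0, mu - 3 * sigma)    # three-sigma control limits

for i, x in enumerate(live):
    if x > ucl or x < lcl:
        print(f"sample {i}: {x} outside ({lcl:.1f}, {ucl:.1f}) -> trigger countermeasure")
    else:
        print(f"sample {i}: {x} in control")
```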

5. Comparison of the Security Policy Models

A comparison of security policy management between the existing work presented in Section 3 (the PFIRES model [3] and the organizational process model [4]) and the proposed Six Sigma model is given in Table 5. The aspects of this comparison are:

• Refining of Security Policies—a security policy management process requires refinement of existing policies in a proactive and reactive manner. The primary objective of the existing models and the presented model is similar and all the models satisfy this requirement.

• Threat Profile—in the Six Sigma model, the security policy is executed against an active threat profile. Due to the causal relationship between security policy and threat as part of the live computing service, an active threat profile is required to provide continuous monitoring and adaptation of the security policy. The existing models in the literature do indicate the need for threat modeling but do not propose it as part of the active system.

• External Factors—the external factors affecting a computing service are the unknowns in any security architecture. Only threats that are known and modeled can be countered by design.

• Feedback—the feedback on the efficacy of a security policy under changing threats is addressed implicitly during policy evaluation and design in the existing systems. In the Six Sigma process, the feedback is explicit, since an explicit threat monitoring system is added to adapt the security policies.

• On the Fly Change—due to the compartmentalization of security policies and threat profiles as an explicit part of the computing service, the proposed model can change on the fly as threats evolve. The threat monitoring system also allows policies to be adapted based on monitoring data. In the current models, because the policy is embedded in the computing service without explicit separation, on-the-fly change may be difficult to enact.

Table 3. Improve phase mapping to security management.

Table 4. Control phase mapping to security management.

Table 5. Feature comparison of the security policy models.

• Mathematical Model—the model presented here is based on the causal relationship between threat and security policy. Without a causality relationship, Six Sigma tools cannot be used for analysis. Thereby, the framework presented in this model differs from the others, where no mathematical framework is presented: the models compared against are based on well-known practices or experience, whereas the proposed model is based on a mathematical approach.

• Industrial Process Integration—the model presented here integrates the security policy management process with industrial processes, which facilitates the industry goals of risk quantification and assessment. The PFIRES model and the organizational process model do not present integration with industrial processes.

6. Conclusions

In this paper, we presented a security policy management process within a Six Sigma framework. Furthermore, we contend that the design of secure computing systems is based on creating adaptive policies and correlating them to threats. We address various challenges in the security policy management process, including:

• Integration with a known management process, thereby reusing tools already existing within an industrial setting.

• Integration of tools with security primitives to facilitate decision making.

• Quantification of risks to digital assets.

REFERENCES

  1. F. B. Schneider, “Enforceable Security Policies,” ACM Transactions on Information and System Security, Vol. 3, No. 1, 2000, pp. 30-50. doi:10.1145/353323.353382
  2. Six Sigma Motorola University, 2011. http://web.archive.org/web/20051106012600/http://www.motorola.com/motorolauniversity.
  3. J. Rees, S. Bandyopadhyay and E. H. Spafford, “PFIRES: A Policy Framework for Information Security,” Communications of the ACM, Vol. 46, No. 7, 2003, pp. 101-106. doi:10.1145/792704.792706
  4. K. J. Knapp, R. F. Morris Jr., T. E. Marshall and T. A. Byrd, “Information Security Policy: An Organizational-Level Process Model,” Computers and Security, Vol. 28, No. 7, 2009, pp. 493-508. doi:10.1016/j.cose.2009.07.001
  5. W. Scacchi, “Process Models in Software Engineering,” Encyclopedia of Software Engineering, 2nd Edition, John Wiley and Sons, Inc., New York, 2001.
  6. R. Shankar, “Process Improvement Using Six Sigma: A DMAIC Guide,” ASQ Quality Press, Milwaukee, 2009.
  7. D. N. Card, “Myths and Strategies of Defect Causal Analysis,” Proceedings of Pacific Northwest Software Quality Conference, Portland, 18-19 October 2006.
  8. G. Zanin and L. V. Mancini, “Towards a Formal Model for Security Policies Specification and Validation in the SELinux System,” Proceedings of the Ninth ACM Symposium on Access Control Models and Technologies (SACMAT’04), New York, 2-4 June 2004, pp. 136-145.
  9. S. Preda, F. Cuppens, N. Cuppens-Boulahia, J. G. Alfaro, L. Toutain and Y. Elrakaiby, “Semantic Context Aware Security Policy Deployment,” Proceedings of the 4th International Symposium on Information, Computer, and Communications Security (ASIACCS’09), Sydney, 10-12 March 2009, pp. 251-261.
  10. D. Xu and K. E. Nygard, “Threat-Driven Modeling and Verification of Secure Software Using Aspect-Oriented Petri Nets,” IEEE Transactions on Software Engineering, Vol. 32, No. 4, 2006, pp. 265-278. doi:10.1109/TSE.2006.40
  11. “A Complete Guide to the Common Vulnerability Scoring System Version 2.0.,” 2011. http://www.first.org/cvss/cvss-guide.html.
  12. “CMLA Service Provider Agreement,” 2011. http://www.cm-la.com/documents/CMLA%20Service%20Provider%20Agreement%20V1.42%2020110712%20final.pdf.
  13. PostgreSQL, 2011. http://www.postgresql.org/
  14. V. E. Sower, R. Quarles and E. Broussard, “Cost of Quality Usage and Its Relationship to Quality System Maturity,” International Journal of Quality & Reliability Management, Vol. 24, No. 2, 2007, pp. 121-140. doi:10.1108/02656710710722257
  15. M. Lazzaroni, “A Tool for Quality Controls in Industrial Process,” IEEE Instrumentation and Measurement Technology Conference, Suntec City, 3-6 March 2009. doi:10.1109/IMTC.2009.5168418
  16. H. De Koning and J. De Mast, “ASQ: The CTQ Flowdown as a Conceptual Model of Project Objectives,” Quality Management Journal, Vol. 14, No. 2, 2007, pp. 19-28.
  17. L. Grunske, R. Colvin and K. Winter, “Probabilistic Model-Checking Support for FMEA,” 4th International Conference on the Quantitative Evaluation of Systems (QEST 2007), Edinburgh, 16-19 September 2007, pp. 119-128.
  18. National Vulnerability Database (NVD), 2011. http://nvd.nist.gov/home.cfm
  19. H. P. Barringer, “Process Reliability and Six Sigma,” National Manufacturing Week Conference, Chicago, 13-16 March 2000.
  20. C. Hsieh, B. Lin and B. Manduca, “Information Technology and Six Sigma Implementation,” Journal of Computer Information Systems, Vol. 47, No. 4, 2007, pp. 1-10.
  21. A. Takanen, J. DeMott and C. Miller, “Fuzzing for Software Security Testing and Quality Assurance,” 1st Edition, Artech House, London, 2008.
  22. “The ANOVA Procedure, SAS/STAT(R) 9.2 User’s Guide,” 2nd Edition, 2011. http://support.sas.com/documentation/cdl/en/statuganova/61771/PDF/default/statuganova.pdf
  23. M. Tanco, E. Viles, L. Ilzarbe and M. Álvarez, “Manufacturing Industries Need Design of Experiments (DoE),” Proceedings of the World Congress on Engineering (WCE 2007), London, Vol. 2, 2-4 July 2007.
  24. D. M. Ferrin, M. J. Miller and D. Muthler, “Six Sigma and Simulation, So What’s the Correlation,” Proceedings of the 2002 Winter Simulation Conference, 8-11 December 2002, pp. 1439-1443.
  25. M. J. McDonald, “Quality Prediction and Mistake Proofing,” Technical Report, Sandia National Laboratories, Washington, DC, 1998. doi:10.2172/650152
