State-of-the-Art Measurement Analysis Software, Training and Consulting Services
Celebrating over 31 Years of Excellence and Innovation, 1987-2018

Measurement Analysis / Analytical Metrology Articles and Papers

 

Below is a list of downloadable articles and papers written by Integrated Sciences personnel to assist in understanding important measurement analysis and quality assurance methods and principles.  Topics include measurement uncertainty analysis, measurement decision risk analysis, statistical measurement process control, calibration interval analysis, and the interpretation and application of measuring equipment specifications.  The downloadable files are in Adobe Acrobat PDF format to ensure that proper text fonts, equations, and images are retained.

 

To view these documents, you will need Adobe Acrobat Reader 6.0 or higher; a free copy of the reader can be downloaded from Adobe.  To download the PDF files directly to your computer, right-click a paper title and select Save Target As.

Measurement Uncertainty Analysis
Castrup, H. and Castrup, S.: Uncertainty Analysis: What is it we’re uncertain of?, Proc. NCSLI Workshop & Symposium, Sacramento, CA, July 2012. (31 pgs)
Abstract: This paper provides guidelines for identifying error sources which are relevant to uncertainty analysis and those which are not.  The guiding question in such identifications is “what is it we’re uncertain of?”  This question is explored within the context of various activities with differing immediate objectives.  Such activities include conformance testing, measurement decision risk analysis, capability statement development, hypothesis testing and equipment parameter tolerancing.


Castrup, S.: Important Elements of an Uncertainty Analysis Report, Proc. of NCSLI Workshop and Symposium, Providence, RI, July 2010. (14 pgs).

Abstract: This paper discusses key elements that should be included in an uncertainty analysis report.  Recommended practices for using measurement and uncertainty units and decimal digits are also presented.  An example analysis report is provided to illustrate how these elements can be combined to clearly indicate how an uncertainty analysis was conducted.

 

Castrup, S.: Comparison of Methods for Establishing Confidence Limits and Expanded Uncertainties, Proc. of 2010 Measurement Science Conference, Pasadena, CA, March 2010. (23 pgs)

Abstract:  This paper examines three methods for computing confidence limits and expanded uncertainties: 1) GUM, 2) Convolution and 3) Monte Carlo Simulation.  The first method combines error distribution variances, while the second and third methods directly combine error distributions via mathematical or numerical techniques.  Four direct measurement scenarios are evaluated and the intervals computed from each method are compared.
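For readers who want to experiment with this comparison, here is a minimal sketch (all values hypothetical, not drawn from the paper) that contrasts a GUM-style expanded uncertainty with a Monte Carlo percentile interval for the sum of a normal and a uniform error:

```python
# Minimal sketch (hypothetical values): compare a GUM-style expanded
# uncertainty with a Monte Carlo interval for the total error
# e = e_normal + e_uniform.
import numpy as np

rng = np.random.default_rng(0)
u_norm = 0.5              # standard uncertainty of a normal error source
a = 1.0                   # half-width of a uniform error source
u_unif = a / np.sqrt(3)   # standard deviation of the uniform distribution

# GUM method: root-sum-square the variances, expand with k = 1.96
# (95% confidence, normality assumption).
u_c = np.hypot(u_norm, u_unif)
U_gum = 1.96 * u_c

# Monte Carlo method: combine the error distributions directly and
# take the 2.5th and 97.5th percentiles of the result.
e = rng.normal(0.0, u_norm, 1_000_000) + rng.uniform(-a, a, 1_000_000)
lo, hi = np.percentile(e, [2.5, 97.5])

print(f"GUM 95% limits:         +/-{U_gum:.4f}")
print(f"Monte Carlo 95% limits: [{lo:.4f}, {hi:.4f}]")
```

The two intervals agree closely in this case because the combined distribution is nearly normal; they can differ when the combined distribution departs from normality.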

 

Castrup, H.: A Welch-Satterthwaite Relation for Correlated Errors, Proc. of the 2010 Measurement Science Conference, Pasadena, CA, March 2010. Revised May 2010. (23 pgs)

Abstract: Working from a derivation for the degrees of freedom of Type B uncertainty estimates, a variation of the Welch-Satterthwaite relation is developed that is applicable to combinations of errors that are both s-independent and correlated.  Expressions for each case are provided for errors arising from both direct and multivariate measurements.
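The paper's correlated-error variant is not reproduced here, but for orientation, the classical Welch-Satterthwaite relation for s-independent errors estimates the effective degrees of freedom of a combined standard uncertainty as

\[
\nu_{\mathrm{eff}} = \frac{u_c^4}{\displaystyle\sum_{i=1}^{n} u_i^4/\nu_i},
\qquad
u_c^2 = \sum_{i=1}^{n} u_i^2,
\]

where u_i and \nu_i are the standard uncertainty and degrees of freedom of the i-th error component.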

 

Castrup, H.: Error Distributions and Other Statistics, ISG Technical Document, January 2009. (18 pgs)

 

Castrup, H. and Castrup, S.: Uncertainty Analysis for Alternative Calibration Scenarios, Presented at the NCSLI Workshop and Symposium, Orlando, FL, August 2008. (27 pgs)

Abstract: The calibration of a unit-under-test attribute is examined within the context of four scenarios.  Each scenario yields a calibration result and a description of measurement process errors that accompany this result.  This information is summarized and then employed to obtain an uncertainty estimate in the calibration result.

Dubro, D. and Castrup, H.: Calculating Propagated Uncertainties Using Higher Order Taylor Terms in the Uncertainty Analysis, Proc. NCSLI Workshop & Symposium, St. Paul, MN, August 2007. (15 pgs)
Abstract: This paper investigates the difficulty and properties of non-linear uncertainty propagation by extending a measurement model out to three orders of a Taylor series expansion in two variables and comparing the results with the G.U.M.'s mysterious suggestion of what the next most significant terms might look like.

 

Castrup, H.: Estimating Parameter Bias Uncertainty, Presented at the Measurement Science Conference, Anaheim, CA, March 2006. (39 pgs)

Abstract: Type B, ANOVA and Bayesian methods are presented for estimating the uncertainty in the bias of measurement references and the uncertainty in parameter deviations from nominal.

 

Castrup, H.: Note on Type B Degrees of Freedom Equation, ISG Technical Document, September 2004. (3 pgs)

 

Castrup, H.: Selecting and Applying Error Distributions in Uncertainty Analysis, Presented at the Measurement Science Conference, Anaheim, 2004. (32 pgs)
Abstract: Guidelines are given for selecting and applying various statistical error distributions that have been found useful for Type A and Type B analyses.  Both traditional and Monte Carlo methods are outlined.  Also discussed is the estimation of the degrees of freedom for both Type A and Type B estimates.

Castrup, S.: Why Spreadsheets are Inadequate for Uncertainty Analysis, Presented at the 8th Annual ITEA Instrumentation Workshop, Lancaster, CA, May 2004 (20 pgs). Updated Jan 9, 2010.

Abstract: This paper discusses key questions and concerns regarding the development of measurement uncertainty analysis worksheets or custom add-in programs for Excel or Lotus spreadsheet applications.

 

Castrup, H.: Estimating and Combining Uncertainties, Presented at the 8th Annual ITEA Instrumentation Workshop, Lancaster, CA, May 2004 (7 pgs).

Abstract: This paper presents a general model for measurement uncertainty analysis that can be applied to the analysis of both direct and multivariate measurements.

 

Castrup, S.: A Comprehensive Comparison of Uncertainty Analysis Tools, Presented at the Measurement Science Conference, Anaheim, CA, January 2004 (27 pgs). Updated Dec 5, 2009.

Abstract: This paper presents a comprehensive review and comparison of several measurement uncertainty analysis software products and freeware that have been developed in the past several years.  Methodology, functionality, user-friendliness, documentation, technical support and other key criteria are addressed.  Suggestions for selecting and using the various measurement uncertainty analysis tools are also provided.

  

Castrup, H.: Estimating Bias Uncertainty, Proc. NCSLI Workshop & Symposium, Washington, D.C., July 2001. (20 pgs)
Abstract:  Methods for estimating the uncertainty in the bias of reference parameters or artifacts are presented. The methods are cognizant of the fact that, although such biases persist from measurement to measurement within a given measurement session, they are, nevertheless, random variables that follow statistical distributions. Accordingly, the standard uncertainty due to measurement bias can be estimated by equating it with the standard deviation of the bias distribution. Since the measurement bias of a reference is a dynamic quantity, subject to change over the calibration interval, both uncertainty growth and parameter interval analysis are also discussed.

 

Castrup, H.: Distributions for Uncertainty Analysis, Proceedings of the International Dimensional Workshop, Knoxville, TN, May 2001. (Revised April 2007, 12 pgs)
Abstract:  This paper describes statistical distributions that can be applied to both Type A and Type B measurement errors and to equipment parameter biases. Once the statistical distribution for a measurement error or bias is characterized, the uncertainty in this error or bias is computed as the standard deviation of the distribution. For Type A estimates, the distribution or "population" standard deviation is estimated by the sample standard deviation. For Type B estimates, the standard deviation is computed from limits, referred to as error containment limits and from probabilities, referred to as containment probabilities. The degrees of freedom for each uncertainty estimate can often be determined, regardless of whether the estimate is Type A or Type B.
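As a worked illustration of the Type B procedure described in the abstract (the containment limits and probability below are hypothetical): for a normally distributed error contained within ±L with probability p, the standard uncertainty is u = L / Φ⁻¹((1+p)/2); for a uniform distribution it is u = L/√3.

```python
# Worked Type B illustration (hypothetical values): convert error
# containment limits and a containment probability into a standard
# uncertainty, i.e., the standard deviation of the assumed distribution.
import math
from scipy.stats import norm

L = 0.10   # containment limits: +/-0.10 units
p = 0.95   # containment probability: 95% of errors fall within +/-L

u_normal = L / norm.ppf((1 + p) / 2)   # normal distribution assumption
u_uniform = L / math.sqrt(3)           # uniform distribution assumption

print(f"u (normal, 95% containment): {u_normal:.4f}")   # ~0.0510
print(f"u (uniform):                 {u_uniform:.4f}")  # ~0.0577
```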

 
Castrup, H.: Uncertainty Growth Estimation in UncertaintyAnalyzer, ISG Technical Document, March 2002 (4 pgs). Revised June 2008.

 

 
Castrup, H.: An Investigation into Estimating Type B Degrees of Freedom, ISG Technical Document, August 2000. (4 pgs)
 
Castrup, H.: Estimating Category B Degrees of Freedom, Proc. Measurement Science Conf., Anaheim, CA, January 2000. (8 pgs)
Abstract: A method is presented for estimating uncertainties in cases where samples of data are unavailable. The method includes a formalism that provides a structure for extracting information from the measurement experience of scientific or technical personnel. This information is used to both estimate uncertainties and to approximate the degrees of freedom of the estimate.  Using these results, confidence limits are developed that obviate the need for arbitrary coverage factors and misleading expanded uncertainties.
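The paper's full formalism is not reproduced here, but the GUM Annex G approximation on which such estimates commonly rest relates the degrees of freedom of a Type B estimate to the relative uncertainty of the uncertainty itself:

\[
\nu \;\approx\; \frac{1}{2}\left[\frac{\sigma(u)}{u}\right]^{-2} \;=\; \frac{u^2}{2\,\sigma^2(u)},
\]

where u is the uncertainty estimate and \sigma(u) expresses how uncertain that estimate is.  For example, if u is judged reliable to about 25%, then \nu \approx 1/(2 \times 0.25^2) = 8.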
 
Castrup, H.: Uncertainty Analysis and Parameter Tolerancing, Proc. NCSL Workshop & Symposium, Dallas, TX, July 1995. (16 pgs)
Abstract:  An uncertainty analysis methodology is described that is applicable to establishing and testing equipment parameter tolerances. The methodology develops descriptions of measurement uncertainty that relate directly to whether parameters will be acceptable for intended applications.  An example is presented that illustrates the concepts involved.

Castrup, H.: Practical Methods for Analysis of Uncertainty Propagation, Proc. 38th Annual ISA Instrumentation Symposium, Las Vegas, NV, April 1992. (25 pgs)
Abstract: An uncertainty analysis methodology is described that is relevant to equipment tolerancing, analysis of experimental data, development of manufacturing templates and calibration standards. Because the methodology is assembled from basic measurement principles, controversies regarding uncertainty combination are avoided.

 

Measuring Equipment Specifications

Castrup, S.: Applying Measuring and Test Equipment Specifications, Proceedings of the NCSLI Workshop and Symposium, San Antonio, TX, July 2009. (24 pgs)

Abstract: MTE specifications are used to estimate measurement uncertainty, establish tolerance limits for calibration and testing, and evaluate false accept risk and false reject risk.  This paper provides illustrative examples of how MTE specifications are used to estimate parameter bias uncertainties, compute test tolerance limits, determine in-tolerance probability, and establish calibration intervals.
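As a simple hedged illustration of two of these uses (all numbers hypothetical, not drawn from the paper): the test uncertainty ratio compares the UUT tolerance span to the expanded measurement uncertainty, and an in-tolerance probability follows from an assumed distribution of parameter bias.

```python
# Hypothetical illustration: use specifications to compute a test
# uncertainty ratio (TUR) and an in-tolerance probability.
from scipy.stats import norm

uut_tol = 1.0    # UUT tolerance: +/-1.0 units (from its specification)
u_meas = 0.10    # standard uncertainty of the calibration measurement
k = 2.0          # coverage factor for expanded uncertainty

tur = (2 * uut_tol) / (2 * k * u_meas)   # tolerance span over uncertainty span
print(f"TUR = {tur:.1f}:1")

# In-tolerance probability, assuming parameter bias is normally
# distributed with standard deviation sigma_bias.
sigma_bias = 0.5
p_in_tol = norm.cdf(uut_tol / sigma_bias) - norm.cdf(-uut_tol / sigma_bias)
print(f"P(in-tolerance) = {p_in_tol:.3f}")   # ~0.954
```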

 

Castrup, S.: Interpreting and Applying Equipment Specifications, Presented at the NCSLI Workshop and Symposium, Washington D.C., August 2005. (15 pgs)

Abstract: Manufacturer specifications are used to select measuring and test equipment (MTE) for a given application.  In addition, manufacturer-specified tolerances are used to compute test uncertainty ratios and estimate bias uncertainties. This paper discusses how manufacturer specifications are obtained, interpreted and used to assess MTE performance and reliability.  Recommended practices are presented and illustrative examples are given for combining specifications.

Measurement Quality Assurance
Castrup, H.: An Examination of Measurement Decision Risk and Other Measurement Quality Metrics. Proc. of NCSLI Workshop and Symposium, San Antonio, TX, July 2009. (31 pgs)  Best paper award winner.

Abstract: Probability density functions are developed for false accept risk, false reject risk and other measurement quality metrics (MQMs).  The relevance of each MQM to testing activities and equipment users is shown graphically.  It is argued and illustrated that, from an equipment user's standpoint, false accept risk is the MQM of greatest value for decision-making purposes.

Castrup, S.: Measurement Assurance Best Practices for MTE Calibration, Presented at the NCSLI Workshop & Symposium, Orlando, FL, August 2008.
Summary: Best practices for assuring measurement quality and reliability during MTE calibration are presented and illustrated.

Castrup, H. and Castrup, S.: Measurement Quality Assurance Principles and Methods, Proc. Measurement Science Conference, Anaheim, CA, March 2009.
Presentation Summary: An overview is presented of the analytical methodologies needed to ensure measurement quality in design, manufacturing, calibration and testing. These include measurement uncertainty analysis, measurement decision risk analysis, calibration interval analysis and end-to-end cost optimization. Also discussed is the status of development and documentation of these and related methodologies, including NCSLI RP-1, RP-12 and RP-18.

Castrup, S. and Castrup, H.: Implementation of Measurement Quality Assurance Principles and Methods, Presented at the Measurement Science Conference, Anaheim, CA, March 2009.
Presentation Summary: This presentation integrates the ingredients that comprise effective measurement quality assurance.  Topics and concepts include:

  • Measurement Quality Assurance

  • Technical Publications & Resources

  • Data Requirements

  • MQA Control Criteria

  • Analysis Procedures

  • Analysis Software

  • Technical Proficiency Requirements

  • MQA Management Goals


Castrup, H.: Applying Measurement Science to Ensure End Item Performance, Proc. Measurement Science Conference, Anaheim, CA, March 2008. (15 pgs)

Abstract: A methodology is presented for the cost-effective determination of appropriate measurement assurance standards and practices for the management of calibration and test equipment.  The methodology takes an integrated approach in which quality, measurement reliability and costs at each level in a test and calibration support hierarchy are linked to their counterparts at every other level in the hierarchy.

 

Measurement Decision Risk Analysis 
Castrup, H.: Decision Risk Analysis for Alternative Calibration Scenarios, Proc. NCSLI Workshop & Symposium, Orlando, FL, August 2008. (7 pgs)
Abstract: The results of calibration of unit-under-test (UUT) attributes and estimates of measurement process uncertainty are employed in the calculation of measurement decision risk within the context of four measurement scenarios.  The decision risks of interest are those that are relevant to meeting Z540.3 requirements as well as internal quality control criteria.  They include unconditional false accept risk (UFAR), conditional false accept risk (CFAR) and false reject risk (FRR).  UFAR is computed both as a program-level and bench-level control metric.  Examples are given to illustrate concepts and procedures.
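A minimal Monte Carlo sketch of the three risk metrics named above, assuming normal distributions for both the UUT bias population and the measurement error (values hypothetical; the paper's analytical treatment is not reproduced):

```python
# Monte Carlo sketch (hypothetical values): estimate UFAR, CFAR and FRR
# for a simple normal/normal calibration model.
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
tol = 1.0          # UUT tolerance limits: +/-1.0
sigma_uut = 0.5    # spread of true UUT biases across the population
u_meas = 0.25      # standard measurement uncertainty

true_bias = rng.normal(0.0, sigma_uut, N)
measured = true_bias + rng.normal(0.0, u_meas, N)

in_tol = np.abs(true_bias) <= tol     # actually in tolerance
accepted = np.abs(measured) <= tol    # declared in tolerance

ufar = np.mean(~in_tol & accepted)    # out of tolerance but accepted
cfar = np.mean(~in_tol[accepted])     # fraction of accepted items out of tol
frr = np.mean(in_tol & ~accepted)     # in tolerance but rejected

print(f"UFAR = {ufar:.3%}  CFAR = {cfar:.3%}  FRR = {frr:.3%}")
```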

 

Castrup, H.: Risk Analysis Methods for Complying with NCSLI Z540.3, Proc. NCSLI Conference, St. Paul, MN, August 2007. (20 pgs)

Abstract: This paper provides the mathematical framework for computing false accept risk and gives guidelines for managing in-tolerance compliance decisions.  The latter includes methods for developing test guardbands that correspond to specified risk levels.  A discussion of the impact of measurement reliability and measurement uncertainty on false accept risk is included.  A brief discussion is given on the application of the fallback 4:1 uncertainty ratio requirement.
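One way to sketch the guardband development step (an illustration under a normal/normal model, not the paper's algorithm): shrink the acceptance limit A below the tolerance limit until the computed unconditional false accept risk meets the 2% Z540.3 target.

```python
# Hedged sketch (not the paper's method): solve for a guardbanded
# acceptance limit A <= tol giving 2% unconditional false accept risk,
# using bivariate-normal probabilities for a normal/normal model.
import numpy as np
from scipy import optimize
from scipy.stats import multivariate_normal, norm

tol = 1.0          # UUT tolerance limits: +/-1.0
sigma_uut = 0.6    # spread of true UUT biases
u_meas = 0.5       # standard measurement uncertainty

# (true bias, measured value) are jointly normal.
cov = [[sigma_uut**2, sigma_uut**2],
       [sigma_uut**2, sigma_uut**2 + u_meas**2]]
mvn = multivariate_normal(mean=[0.0, 0.0], cov=cov)

def ufar(A):
    # P(accept) minus P(accept and in tolerance) = P(accept, out of tol).
    p_accept = 2.0 * norm.cdf(A / np.sqrt(cov[1][1])) - 1.0
    p_both = (mvn.cdf([tol, A]) - mvn.cdf([-tol, A])
              - mvn.cdf([tol, -A]) + mvn.cdf([-tol, -A]))
    return p_accept - p_both

A = optimize.brentq(lambda a: ufar(a) - 0.02, 1e-6, tol)
print(f"Guardbanded acceptance limit: +/-{A:.3f} (tolerance +/-{tol})")
```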

 

Castrup, H.: Bayesian Risk Analysis, ISG Technical Document, April 2007. (8 pgs)

Abstract: This document applies Bayesian analysis methods to perform a risk analysis for accepting a UUT parameter based on a priori knowledge and on the results of measuring the parameter during testing or calibration.  Combining the measurement results with a priori knowledge provides the information that makes Bayesian analysis possible. The use of Bayesian methods to develop test guardbands is briefly addressed, and it is argued that, if risks are computed using Bayesian methods, guardbands become superfluous.
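The core Bayesian step can be written compactly (notation mine, not necessarily the document's): with a priori bias distribution g(x), measurement likelihood f(y | x) and tolerance limits ±L, the posterior in-tolerance probability given an observed value y is

\[
P(\text{in-tolerance} \mid y) \;=\;
\frac{\displaystyle\int_{-L}^{L} f(y \mid x)\, g(x)\, dx}
     {\displaystyle\int_{-\infty}^{\infty} f(y \mid x)\, g(x)\, dx}.
\]

Accept/reject decisions based directly on this posterior are what make separate guardbands superfluous in the Bayesian framework.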

 

Castrup, H.: Analyzing Uncertainty for Risk Management, Proc. ASQC 49th Annual Quality Congress, Cincinnati, OH, May 1995. (13 pgs)
Abstract: A structured approach to uncertainty analysis is described that is applicable to product quality assessment and risk management.  Expressions are derived that incorporate estimated uncertainties in risk analysis to determine whether product parameters will be acceptable for intended applications.
 
Castrup, H.: Uncertainty Analysis for Risk Management, Proc. Measurement Science Conf., Anaheim, CA, January 1995. (27 pgs)
Abstract: Measurement errors and error models are reviewed and measurement process error components are described. Measurement decision risks are estimated based on the results of an uncertainty analysis example and risk management considerations are outlined.  Classical measurement decision risk is also discussed, with special emphasis on the impact of process uncertainty on false accept and false reject risks. A new method for computing risks is given in Appendix B.

 

Castrup, H. et al.: NASA Reference Publication 1342, Metrology - Calibration and Measurement Processes Guidelines, June 1994. (300+ pages)

Abstract: This publication is intended to assist system contractors in meeting the metrology requirements of NASA Quality Assurance handbooks.  It addresses the entire measurement process, from the definition of measurement requirements through operations that provide data for decisions.
 
Statistical Process Control
Castrup, H.: Risk-Based Control Limits, Proc. Measurement Science Conf., Anaheim, CA, January 2001. (9 pgs)
Abstract:  A methodology is presented for the development of SPC control limits for measurement processes.  The methodology employs both Bayesian and traditional measurement decision risk concepts to establish control limits that flag whether measuring processes are in or out of control relative to the specifications of the artifacts they measure. The methodology has particular relevance for calibration and testing.

 

Castrup, H.: Analytical Metrology SPC Methods for ATE Implementation, Proc. NCSL Workshop & Symposium, Albuquerque, NM, August 1991. (16 pgs)
Abstract: Probability theory is employed to develop an analytical metrology SPC methodology which is amenable to implementation in automated testing environments.  The methodology can be used to obtain in-tolerance probability estimates and bias estimates for both test systems and units under test without recourse to external measurement standards.  This makes it particularly applicable in remote environments where measuring instruments are expected to function without calibration for extended periods of time.

 
Calibration Interval Analysis
Castrup, H.: Calibration Intervals from Variables Data, Presented at the NCSLI Workshop & Symposium, Washington, D.C., August 2005. (Revised January 2006, 12 pgs)

Abstract:  This paper describes a methodology for determining calibration intervals from variables data.  A regression analysis approach is developed and algorithms are given for setting parameter calibration intervals from the results of variables data analysis.
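A minimal sketch of the regression idea (drift data hypothetical; the paper's algorithms are not reproduced): fit parameter deviation against time since calibration, then solve for the time at which the predicted deviation reaches a tolerance limit.

```python
# Minimal regression sketch (hypothetical data): set a calibration
# interval as the time at which predicted parameter drift reaches the
# tolerance limit.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 180])             # days since calibration
y = np.array([0.00, 0.02, 0.05, 0.06, 0.09, 0.13])  # measured deviation

slope, intercept = np.polyfit(t, y, 1)   # linear drift model
tol = 0.10                               # tolerance limit on the deviation

interval = (tol - intercept) / slope     # predicted time to reach tolerance
print(f"Drift rate {slope:.5f}/day -> interval of about {interval:.0f} days")
```

A production implementation would also propagate the regression uncertainty so that the interval reflects a stated confidence level rather than a point prediction.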

 

Castrup, H. and Johnson, K.: Techniques for Optimizing Calibration Intervals, Proc. ASNE Test & Calibration Symposium, Arlington, VA, December 1994. (6 pgs)
Abstract: Concepts central to calibration interval analysis are described. Guidelines are presented that permit optimizing intervals with respect to both life cycle support costs and costs due to sub-optimal equipment performance. Special focus is given to mathematical reliability modeling methods and to calibration history data management requirements.
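As a one-equation illustration of the reliability-modeling step (the exponential model is only one of several candidates; this is not the paper's full treatment): if measurement reliability decays as R(t) = e^{-\lambda t}, the interval T that achieves a reliability target R^* is

\[
R(t) = e^{-\lambda t}
\quad\Longrightarrow\quad
T = -\frac{\ln R^{*}}{\lambda}.
\]

For example, a target of R^* = 0.85 with an observed out-of-tolerance rate of \lambda = 0.01 per week gives T \approx 16 weeks.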

Wyatt, D. and Castrup, H.: Managing Calibration Intervals, Proc. NCSL Workshop & Symposium, Albuquerque, NM, August 1991. (20 pgs)
Abstract: This paper presents guidelines for implementing calibration interval management systems as components of computerized general calibration management systems. In addition to optimizing calibration interval management, following these guidelines can significantly improve compliance with MIL-STD-45662A and ISO 9000.

Castrup, H.: Calibration Requirements Analysis System, Proc. NCSL Workshop & Symposium, Denver, CO, 1989. (20 pgs)
Abstract: This paper reports on recent developments which promise to yield a user capability for establishing quality assurance standards and practices that include accuracy ratio criteria, measurement reliability targets, test tolerance limits vs. performance limits, and equipment adjustment or renewal policy. 

Jackson, D. and Castrup, H.: Reliability Analysis Methods for Calibration Intervals: Analysis of Type III Censored Data, Proc. NCSL Workshop & Symposium, Denver, CO, July 1987. (12 pgs)
Abstract: Unfortunately, calibration history data, on which calibration intervals are based, do not provide precise time to failure (i.e., out-of-tolerance) information.  Consequently, methods are required which extend beyond classical reliability analysis techniques.  This paper offers such an extension by providing a maximum likelihood estimation technique for the analysis of data characterized by unknown failure times.
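The maximum likelihood idea can be sketched as follows (notation mine, not the paper's): each calibration record i reports only whether the item was found in tolerance (x_i = 1) or out of tolerance (x_i = 0) at resubmission time t_i.  For a candidate measurement reliability model R(t | \theta), the likelihood of the observed data is

\[
L(\theta) = \prod_{i=1}^{n} R(t_i \mid \theta)^{x_i}\,\bigl[1 - R(t_i \mid \theta)\bigr]^{1 - x_i},
\]

and \theta is chosen to maximize it.  The failure times themselves never enter, which is what distinguishes this approach from classical life-testing analysis.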

 

 


 

Integrated Sciences Group
14608 Casitas Canyon Road • Bakersfield, CA 93306
Phone 1-661-872-1683 • FAX 1-661-872-3669

 


 

Page Updated May 14, 2018