Measurement Uncertainty Analysis
Abstract: This paper discusses key elements that should be included in an
uncertainty analysis report. Recommended practices for using measurement
and uncertainty units and decimal digits are also presented. An example
analysis report is provided to illustrate how these elements can be combined to
clearly indicate how an uncertainty analysis was conducted.
Castrup, S.: Comparison of Methods for Establishing Confidence Limits and Expanded Uncertainties, Proc. 2010 Measurement Science Conference, Pasadena, CA, March 2010. (23 pgs)
Abstract:
This paper examines three methods for computing confidence limits and expanded
uncertainties: 1) GUM, 2) Convolution and 3) Monte Carlo Simulation. The first
method combines error distribution variances, while the second and third methods
directly combine error distributions via mathematical or numerical techniques.
Four direct measurement scenarios are evaluated and the intervals computed from
each method are compared.
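The paper's four scenarios are not reproduced here. As a minimal sketch with purely illustrative values, the following Python snippet shows the basic contrast between the first and third methods for a simple direct measurement with one normal and one uniform error source:
```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative error budget: a normally distributed random error and a
# uniformly distributed resolution error (values are assumptions).
u_random = 0.010       # standard uncertainty of the random error
a_resolution = 0.005   # half-width of the uniform resolution error

# GUM approach: root-sum-square the variances, then expand with a
# normal coverage factor (k = 1.96 for ~95% coverage).
u_c = np.sqrt(u_random**2 + (a_resolution / np.sqrt(3))**2)
U_gum = 1.96 * u_c

# Monte Carlo approach: combine the error distributions directly by
# sampling, then read the 95% interval off the empirical percentiles.
n = 1_000_000
errors = (rng.normal(0.0, u_random, n)
          + rng.uniform(-a_resolution, a_resolution, n))
lo, hi = np.percentile(errors, [2.5, 97.5])

print(f"GUM expanded uncertainty: +/-{U_gum:.4f}")
print(f"Monte Carlo 95% interval: [{lo:.4f}, {hi:.4f}]")
```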
Castrup, H.: A Welch-Satterthwaite Relation for Correlated Errors, Proc. 2010 Measurement Science Conference, Pasadena, CA, March 2010. Revised May 2010. (23 pgs)
Abstract:
Working from a derivation for the degrees of freedom of Type B uncertainty
estimates, a variation of the Welch-Satterthwaite relation is developed that is
applicable to combinations of errors that are both s-independent and
correlated. Expressions for each case are provided for errors arising from both
direct and multivariate measurements.
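The paper's correlated-error variant is not reproduced here; for reference, the classical Welch-Satterthwaite relation for s-independent errors, which the paper generalizes, is

$$\nu_{\text{eff}} = \frac{u_c^4}{\sum_{i=1}^{N} u_i^4/\nu_i}, \qquad u_c^2 = \sum_{i=1}^{N} u_i^2,$$

where $u_i$ and $\nu_i$ are the standard uncertainty and degrees of freedom of the $i$-th error component and $u_c$ is the combined standard uncertainty.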
Castrup, H.: Error Distributions and Other Statistics, ISG Technical Document, January 2009. (18 pgs)
Castrup, H. and Castrup, S.: Uncertainty Analysis for Alternative Calibration Scenarios, Presented at the NCSLI Workshop and Symposium, Orlando, FL, August 2008. (27 pgs)
Abstract:
The calibration of a unit-under-test attribute is examined within the context of
four scenarios. Each scenario yields a calibration result and a description of
measurement process errors that accompany this result. This information is
summarized and then employed to obtain an estimate of the uncertainty in the calibration result.
Castrup, H.: Estimating Parameter Bias Uncertainty, Presented at the Measurement Science Conference, Anaheim, CA, March 2006. (39 pgs)
Abstract:
Type B, ANOVA
and Bayesian methods are presented for estimating the uncertainty in the
bias of measurement references and the uncertainty in parameter deviations
from nominal.
Castrup, H.: Note on Type B Degrees of Freedom Equation, ISG Technical Document, September 2004. (3 pgs)
Castrup, H.: Selecting and Applying Error Distributions in Uncertainty Analysis, Presented at the Measurement Science Conference, Anaheim, CA, 2004. (32 pgs)
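To illustrate why the choice of distribution matters (this summary is standard material, not taken from the paper), the standard uncertainty implied by containment limits $\pm a$ differs markedly across the common candidates:

$$u_{\text{uniform}} = \frac{a}{\sqrt{3}}, \qquad u_{\text{triangular}} = \frac{a}{\sqrt{6}}, \qquad u_{\text{U-shaped}} = \frac{a}{\sqrt{2}}, \qquad u_{\text{normal}} = \frac{a}{\Phi^{-1}\!\left((1+p)/2\right)},$$

where, in the normal case, $p$ is the containment probability associated with the limits $\pm a$.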
Castrup, S.: Why Spreadsheets are Inadequate for Uncertainty Analysis, Presented at the 8th Annual ITEA Instrumentation Workshop, Lancaster, CA, May 2004. Updated January 9, 2010. (20 pgs)
Abstract: This paper
discusses key questions and concerns regarding the
development of measurement uncertainty analysis worksheets or custom add-in programs
for Excel or Lotus spreadsheet applications.
Castrup, H.: Estimating and Combining Uncertainties, Presented at the 8th Annual ITEA Instrumentation Workshop, Lancaster, CA, May 2004. (7 pgs)
Abstract: This paper presents a general
model for
measurement uncertainty analysis that can be applied to the analysis of
direct measurements and multivariate measurements.
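The paper's general model is not reproduced here; for a multivariate measurement $y = f(x_1, \ldots, x_n)$, the standard (GUM) law of propagation of uncertainty to which such models reduce is

$$u_c^2(y) = \sum_{i=1}^{n} c_i^2\,u^2(x_i) + 2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} c_i c_j\,u(x_i)\,u(x_j)\,r_{ij}, \qquad c_i = \frac{\partial f}{\partial x_i},$$

with $r_{ij}$ the correlation coefficient between $x_i$ and $x_j$; a direct measurement is the $n = 1$ special case.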
Castrup, S.: A Comprehensive Comparison of Uncertainty Analysis Tools, Presented at the Measurement Science Conference, Anaheim, CA, January 2004. Updated December 5, 2009. (27 pgs)
Abstract: This paper presents a comprehensive review and comparison of several measurement uncertainty analysis software products and freeware that have been developed in the past several years. Methodology, functionality, user-friendliness, documentation, technical support and other key criteria are addressed. Suggestions for selecting and using the various measurement uncertainty analysis tools are also provided.
Castrup, H.: Distributions for Uncertainty Analysis, Proc. International Dimensional Workshop, Knoxville, TN, May 2001. (Revised April 2007, 12 pgs)
Castrup, H.: Estimating Bias Uncertainty, Proc. NCSLI Workshop & Symposium, Washington, D.C., July 29 - August 2, 2001. (20 pgs)
Abstract: Methods are presented for estimating the uncertainty in the bias of reference parameters or artifacts. These methods recognize that, although such biases persist from
measurement to measurement within a given measurement session, they are,
nevertheless, random variables that follow statistical distributions.
Accordingly, the standard uncertainty due to measurement bias can be
estimated by equating it with the standard deviation of the bias
distribution. Since the measurement bias of a reference is a dynamic
quantity, subject to change over the calibration interval, both
uncertainty growth and parameter interval analysis are also discussed.
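A minimal Python sketch of the central idea, equating the bias standard uncertainty with the standard deviation of the bias distribution, assuming a normal bias distribution with known containment limits and containment probability (values are illustrative, not from the paper):
```python
from scipy.stats import norm

# Illustrative values: a reference with tolerance limits of +/-0.5 units
# and an observed in-tolerance (containment) probability of 95%.
L = 0.5    # containment limit
p = 0.95   # containment probability

# Assuming the bias distribution is normal with zero mean, its standard
# deviation, and hence the bias standard uncertainty, follows from the
# containment limit and probability.
u_bias = L / norm.ppf((1 + p) / 2)
print(f"bias standard uncertainty: {u_bias:.4f}")   # ~0.2551
```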
Measuring Equipment Specifications
Castrup, S.: Applying Measuring and Test Equipment Specifications, Proc. NCSLI Workshop and Symposium, San Antonio, TX, July 2009. (24 pgs)
Castrup, S.: Interpreting and Applying Equipment Specifications, Presented at the NCSLI Workshop and Symposium.
Abstract: Manufacturer specifications are used to select measuring and test equipment (MTE) for a given application. In addition, manufacturer specified tolerances are used to compute test uncertainty ratios and estimate bias uncertainties. This paper discusses how manufacturer specifications are obtained, interpreted and used to assess MTE performance and reliability. Recommended practices are presented and illustrative examples are given for combining specifications.
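As a hedged illustration (the numbers and the definition of TUR as tolerance span over twice the 95% expanded uncertainty are assumptions, not taken from the paper), combining a two-part specification and computing a test uncertainty ratio might look like this in Python:
```python
# Illustrative spec of the form +/-(% of reading + % of range).
reading = 5.0    # volts, applied value
range_ = 10.0    # volts, instrument range

tol = 0.0002 * reading + 0.0001 * range_   # +/-0.002 V tolerance limit

# One common TUR definition: UUT tolerance span over twice the
# calibration process expanded uncertainty (95%).
U_cal = 0.0005   # volts, assumed expanded uncertainty of the cal process
tur = (2 * tol) / (2 * U_cal)
print(f"tolerance: +/-{tol:.4f} V, TUR = {tur:.1f}:1")
```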
Measurement Quality Assurance
Abstract: Probability density functions are developed for false accept risk, false reject risk and other measurement quality metrics (MQMs). The relevance of each MQM to testing activities and equipment users is shown graphically. It is argued and illustrated that, from an equipment user's standpoint, false accept risk is the MQM of greatest value for decision-making purposes.
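The paper's density functions are not reproduced here; as a rough companion, a Monte Carlo sketch with illustrative values can estimate the same metrics for a normal UUT population and normal measurement error:
```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values (not from the paper).
tol = 1.0          # UUT tolerance limits +/- tol
sigma_uut = 0.5    # std dev of the UUT parameter population
sigma_meas = 0.25  # std dev of the measurement process error

n = 1_000_000
true = rng.normal(0.0, sigma_uut, n)               # true parameter values
observed = true + rng.normal(0.0, sigma_meas, n)   # measured values

in_tol = np.abs(true) <= tol
accepted = np.abs(observed) <= tol

# Unconditional false accept risk: out-of-tolerance item accepted.
ufar = np.mean(~in_tol & accepted)
# Conditional false accept risk: probability an accepted item is out of tolerance.
cfar = np.sum(~in_tol & accepted) / np.sum(accepted)
# False reject risk: in-tolerance item rejected.
frr = np.mean(in_tol & ~accepted)

print(f"unconditional false accept risk: {ufar:.4%}")
print(f"conditional false accept risk:   {cfar:.4%}")
print(f"false reject risk:               {frr:.4%}")
```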
Abstract: A methodology is presented for the cost-effective determination of appropriate measurement assurance standards and practices for the management of calibration and test equipment. The methodology takes an integrated approach in which quality, measurement reliability and costs at each level in a test and calibration support hierarchy are linked to their counterparts at every other level in the hierarchy.
Measurement Decision Risk Analysis
Castrup, H.: Risk Analysis Methods for Complying with NCSLI Z540.3, Proc. NCSLI Conference, St. Paul, MN, August 2007. (20 pgs)
Abstract: This paper provides the mathematical framework for computing false accept risk and gives guidelines for managing in-tolerance compliance decisions. The latter includes methods for developing test guardbands that correspond to specified risk levels. A discussion of the impact of measurement reliability and measurement uncertainty on false accept risk is included. A brief discussion is given on the application of the fallback 4:1 uncertainty ratio requirement.
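The paper's guardbanding methods are not reproduced here. One numerical route is to shrink the acceptance limit until an estimated false accept risk meets a target of the kind Z540.3 specifies; the sketch below uses the unconditional definition of false accept risk and illustrative values, both of which are assumptions:
```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative values: find an acceptance limit A (a guardbanded version
# of the tolerance limit) so the false accept risk stays under 2%.
tol, sigma_uut, sigma_meas = 1.0, 0.7, 0.35
target = 0.02

n = 500_000
true = rng.normal(0.0, sigma_uut, n)
observed = true + rng.normal(0.0, sigma_meas, n)
out_of_tol = np.abs(true) > tol

def false_accept_risk(A):
    # Unconditional false accept risk at acceptance limit A.
    return np.mean(out_of_tol & (np.abs(observed) <= A))

# Bisect on A: the risk increases monotonically with the acceptance limit.
lo, hi = 0.0, tol
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if false_accept_risk(mid) > target:
        hi = mid
    else:
        lo = mid
print(f"guardbanded acceptance limit: {lo:.3f} (tolerance {tol})")
```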
Castrup, H.: Bayesian Risk Analysis, ISG Technical Document, April 2007. (8 pgs)
Abstract: This document applies Bayesian analysis methods to perform a risk analysis for accepting a UUT parameter based on a priori knowledge and on the results of measuring the parameter during testing or calibration. Combining the measurement results with a priori knowledge provides the information that makes Bayesian analysis possible. The use of Bayesian methods to develop test guardbands is briefly addressed and an argument is presented that, if risks are computed using Bayesian methods, guardbands become superfluous.
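The document's own derivation is not reproduced here; a minimal sketch of the conjugate-normal case with illustrative values shows the posterior in-tolerance probability of a UUT parameter given a priori knowledge and a measured deviation:
```python
from scipy.stats import norm

# Illustrative values: normal prior for the UUT parameter bias and a
# normally distributed measurement error.
tol = 1.0        # tolerance limits +/- tol
sigma0 = 0.7     # prior std dev of the UUT bias (a priori knowledge)
sigma_m = 0.35   # std dev of the measurement process error
y = 0.8          # measured deviation from nominal

# Conjugate normal update: posterior of the true bias given y.
w = sigma0**2 / (sigma0**2 + sigma_m**2)
mu_post = w * y
sigma_post = (sigma0**2 * sigma_m**2 / (sigma0**2 + sigma_m**2)) ** 0.5

# Posterior probability the parameter is in tolerance, given the measurement.
p_in_tol = (norm.cdf((tol - mu_post) / sigma_post)
            - norm.cdf((-tol - mu_post) / sigma_post))
print(f"P(in tolerance | y = {y}): {p_in_tol:.3f}")
# Accept if the posterior risk 1 - p_in_tol meets the chosen target.
```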
Castrup, H.: Analyzing Uncertainty for Risk Management, Proc. ASQC 49th Annual Quality Congress, Cincinnati, OH, May 1995. (13 pgs)
Castrup, H. et al: NASA Reference Publication 1342, Metrology - Calibration and Measurement Processes Guidelines, June 1994. (300+ pages).
Abstract: This publication is intended to assist system contractors in meeting the metrology requirements of NASA Quality Assurance handbooks. It addresses the entire measurement process, from the definition of measurement requirements through operations that provide data for decisions.
Castrup, H.: Analytical Metrology SPC Methods for ATE Implementation, Proc. NCSLI Workshop & Symposium, Albuquerque, NM, August 1991. (16 pgs)
Castrup, H.: Calibration Intervals from Variables Data, Presented at the NCSLI Workshop & Symposium, Washington, D.C., August 2005. (Revised January 2006, 12 pgs)
Abstract: This paper describes a methodology for determining calibration intervals from variables data. A regression analysis approach is developed and algorithms are given for setting parameter calibration intervals from the results of variables data analysis.
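The paper's algorithms are not reproduced here; a minimal sketch of the regression idea, assuming a simple linear drift model and illustrative data:
```python
import numpy as np

# Illustrative variables data: deviation of a parameter from nominal
# observed at successive calibrations.
t = np.array([0.0, 6.0, 12.0, 18.0, 24.0])        # months since calibration
dev = np.array([0.00, 0.04, 0.09, 0.12, 0.17])    # measured deviations

# Regression approach: fit a linear drift model dev(t) = b0 + b1 * t.
b1, b0 = np.polyfit(t, dev, 1)

# Set the interval as the time at which the projected deviation reaches
# the tolerance limit (in practice a confidence bound on the fit would
# be used rather than the bare projection).
tol = 0.20
interval = (tol - b0) / b1
print(f"drift rate: {b1:.4f} per month; projected interval: {interval:.1f} months")
```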
Castrup, H. and Johnson, K.: Techniques for Optimizing Calibration Intervals, Proc. ASNE Test & Calibration Symposium, Arlington, VA, December 1994. (6 pgs)