Several important references have used degradation data to assess reliability. Nelson discussed a special situation in which the degradation measurement is destructive, so that only one measurement can be made on each item. In the literature, there are two major approaches to modeling degradation data.
One approach is to assume that the degradation is a random process in time. Doksum, for example, used a Wiener process model to analyze degradation data; the model and inference methods were illustrated with a case application involving self-regulating heating cables. An alternative approach is to consider more general statistical models, in which degradation is modeled as a function of time and of some (possibly multidimensional) random variables. These models are called general degradation path models.
They considered a nonlinear mixed-effects model and used a two-stage method to obtain point estimates and confidence intervals of percentiles of the failure time distribution.
Lu et al. and Su et al. also studied general degradation path models; a data set from a semiconductor application was used to illustrate their methods. They used these properties to obtain point estimates and approximate confidence intervals for percentiles of the failure time distribution, and applied the proposed methods to metal film resistor and metal fatigue crack length data sets. The former approach seemed to reduce the effect of different patterns of degradation paths and to improve the estimation of the time-to-failure distribution, providing much tighter confidence intervals.
Random fatigue crack growth was discussed in detail as an example of a degradation data problem. In a degradation test, measurements of performance are obtained for a random sample of test units as each unit degrades over time. Thus, the general approach is to model the degradation of the individual units using the same functional form, and the differences between units using random effects.
The model is y_ij = eta(t_ij; beta_i) + eps_ij, i = 1, ..., n units, j = 1, ..., m_i measurements, where y_ij is the observed degradation of unit i at time t_ij, eta is the actual degradation path, beta_i is the (possibly multidimensional) vector of random effects for unit i, and eps_ij is the measurement error. It is also assumed that y and t are in appropriately transformed scales, if needed; for example, y might be in log-degradation and t in log-time. The proportion of failures at time t is equivalent to the proportion of degradation paths that exceed the critical level D_f by time t.
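A minimal simulation sketch of this path model may help fix ideas. The linear form of eta, the normal random effects, and all numerical values below are illustrative assumptions, not parameters from the article:

```python
import random
random.seed(1)

D_f = 5.0             # hypothetical critical degradation level
def eta(t, beta):     # actual path of one unit: assumed linear in transformed scales
    return beta[0] + beta[1] * t

def simulate_unit(times, sigma_eps=0.1):
    # unit-specific random effects (hypothetical normal distributions)
    beta = (random.gauss(0.0, 0.2), random.gauss(1.0, 0.15))
    # observed degradation = actual path + measurement error
    return [eta(t, beta) + random.gauss(0.0, sigma_eps) for t in times]

times = [1, 2, 3, 4, 5]
paths = [simulate_unit(times) for _ in range(14)]   # 14 units, as in the wheel data
# a unit "fails" once its observed path exceeds D_f
failed = sum(max(y) >= D_f for y in paths)
print(f"{failed} of {len(paths)} units crossed D_f by t = {times[-1]}")
```

The same functional form eta is shared by all units, while the random draw of beta per unit captures unit-to-unit differences, exactly as the paragraph above describes.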
Thus, it is possible to define the distribution of the time-to-failure T for model (1) as F_T(t) = P(T <= t) = P(eta(t; beta) >= D_f). For simple path models the distribution function F_T(t) can be expressed in closed form. For many path models, however, this is not possible, and the resulting expressions must be evaluated numerically. So the problem reduces to parameter estimation. Simulation studies showed that the method compared well with more computationally intensive methods. In other words, these functions were developed for the specific case where the random effects follow a multivariate normal distribution.
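As an illustration of a closed-form case (a hypothetical example, not the article's data), consider a linear path y = beta1 + beta2*t with beta1 fixed and a lognormal random rate beta2. The crossing time T = (D_f - beta1)/beta2 is then itself lognormal, so F_T(t) can be evaluated directly:

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# hypothetical linear path y = beta1 + beta2 * t; failure when y >= D_f
beta1, D_f = 0.0, 5.0
mu, sigma = 0.0, 0.5          # assumed lognormal parameters of the rate beta2

def F_T(t):
    # T = (D_f - beta1) / beta2 is lognormal, so
    # F_T(t) = Phi((log t - log(D_f - beta1) + mu) / sigma)
    return norm_cdf((math.log(t) - math.log(D_f - beta1) + mu) / sigma)

for t in (2.0, 5.0, 12.0):
    print(f"F_T({t}) = {F_T(t):.3f}")
```

With these assumed values the median failure time is (D_f - beta1)/exp(mu) = 5, so F_T(5) = 0.5. For other random-effects distributions no such closed form exists and numerical evaluation is needed, as noted below.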
In all of them, the failure time distribution F_T(t) was estimated numerically using Monte Carlo simulation.
In addition, the authors presented two other methods of degradation data analysis, namely the approximate and the analytical methods. Both are difficult to apply when the degradation path model is nonlinear and has more than one random parameter. The methods described so far rely on maximum likelihood or least squares estimation of the model parameters (the so-called "classical inference" procedures) and on Monte Carlo simulation. An alternative approach to degradation data analysis is to use Bayesian methods.
In particular, because reliability is a function of the parameters of the degradation model, the posterior distribution for reliability at a specified time is straightforward to obtain from the posterior distribution of the model parameters. Hamada used a Bayesian approach to analyze laser degradation data, but did not compare the results with the available non-Bayesian approaches.
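A rough sketch of this idea, under assumptions of my own (a hypothetical one-parameter path y = beta*t + error with known error standard deviation, a normal prior on beta, and a simple random-walk Metropolis sampler, rather than whatever sampler the cited authors used): reliability R(t0) = P(T > t0) = P(beta < D_f/t0) is just a function of beta, so its posterior is read directly off the posterior draws.

```python
import math, random
random.seed(2)

# synthetic data from a hypothetical one-parameter path y = beta * t + noise
beta_true, sigma, D_f = 0.8, 0.2, 5.0
ts = [1, 2, 3, 4, 5]
ys = [beta_true * t + random.gauss(0, sigma) for t in ts]

def log_post(beta):
    # assumed N(1, 1) prior plus Gaussian likelihood with known sigma
    lp = -0.5 * (beta - 1.0) ** 2
    lp += sum(-0.5 * ((y - beta * t) / sigma) ** 2 for t, y in zip(ts, ys))
    return lp

# random-walk Metropolis, discarding the first 5000 draws as burn-in
beta, samples = 1.0, []
for i in range(20000):
    prop = beta + random.gauss(0, 0.1)
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(beta))):
        beta = prop
    if i >= 5000:
        samples.append(beta)

post_mean = sum(samples) / len(samples)
# posterior reliability at t0: R(t0) = P(T > t0) = P(beta < D_f / t0)
t0 = 5.0
R_t0 = sum(b < D_f / t0 for b in samples) / len(samples)
print(f"posterior mean of beta = {post_mean:.3f}, R({t0}) = {R_t0:.3f}")
```

The point is the last two lines: once posterior draws of the model parameters are available, the posterior for reliability at any time requires no extra modeling work.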
The outline of the article is as follows. The methods based on "classical" inference, as well as the Bayesian approach, are briefly presented in Section 3. The "Train Wheel degradation data" are analyzed in Section 4. Conclusions and final comments end the paper in Section 5. Wheel failures, which account for half of all train derailments, cost billions of dollars to the global rail industry.
Wheel failures also accelerate rail deterioration.
To minimize rail breaks and help avoid catastrophic events such as derailments, railways are now closely monitoring the performance of wheels and trying to remove them before they start badly affecting the rails. Most railways keep in a database detailed descriptions of all maintenance actions performed on their trains.
The data used in this article are just a small subset of such a database, and come from a larger study being conducted by a Brazilian railway company. The complete database includes, among other information, the diameter measurements of the wheels, taken at thirteen (13) equally spaced inspection times. A wheel's location in a particular car within a given train is specified by an axle number (1, 2, 3, or 4, the number of axles on the car) and by the side of the wheel on the axle (right or left). In this preliminary study, special attention was given to the CA1 cars because these are the ones responsible for pushing the other three cars in a given train.
It is known that the operating mode of such cars accelerates the degradation process of their wheels. Therefore, the data used in this paper refer to the diameter measurements of the wheels located on the left side of axle number 1 of each of the CA1 cars. The diameter of a new wheel is mm.
When the diameter reaches mm, the wheel is replaced by a new one. Figure 1 presents the degradation profiles of the 14 wheels under study.
Instead of plotting the diameters themselves, the curves were constructed using the degradation observed at each evaluation time t_i. Note that three of the fourteen units studied reached the threshold level during the observation time. The main purpose here is to use the degradation measurements to estimate the lifetime distribution F_T(t) of those train wheels.
Once this distribution is obtained, one can get estimates of other important characteristics, such as the MTTF (mean time to failure; here, specifically, the mean covered distance) and quantiles of the lifetime distribution, among others. The profiles are shown in Figure 1.

Statistical Methods for Degradation Data Analysis

In this section, "classical" and Bayesian methods are presented. First, the four methods based on "classical" inference are briefly presented in Section 3.
Next, the Bayesian approach is presented in Section 3. The main purpose of a statistical analysis of degradation data is to obtain an estimate of the failure time distribution F_T(t). Therefore, for a given degradation path model, two main steps are involved in such an analysis: (1) the estimation of the model parameters and (2) the evaluation of F_T(t). For some particularly simple path models, F_T(t) can be expressed as a simple function, and simple methods, such as the approximate and the analytical methods, can be used to estimate F_T(t).
These methods are described in Sections 3. The two-stage and the numerical methods are more general and make the estimation of F_T(t) possible in any situation. Consider the general degradation model (1), given in Section 1. The approximate method comprises two steps. The first one consists of a separate analysis for each unit to predict the time at which the unit will reach the critical degradation level D_f corresponding to failure.
These times are called "pseudo" failure times. In the second step, the n "pseudo" failure times are analyzed as a complete sample of failure times to estimate F_T(t). Formally, the method fits the path model to each unit separately; this can be done by least squares (linear or nonlinear, depending on the functional form of the degradation path). The approximate method is simple and intuitively appealing, but note that it treats the model parameters as fixed. Moreover, the approximate method presents the following problems: it ignores the errors in the prediction of the "pseudo" failure times and does not consider the errors in the observed degradation values; the distribution of the "pseudo" failure times does not generally correspond to the one implied by the degradation model; and, in some cases, the amount of degradation data collected can be insufficient for estimating all the model parameters.
In these scenarios, it might be necessary to fit different models to different units to predict the "pseudo" failure times. For some simple path models, F_T(t) can be expressed in closed form. The following example provides an illustration of such a case. Other probability density functions can be used along with the same procedure in order to obtain the failure time distribution F_T(t).
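The two steps of the approximate method can be sketched as follows. The linear path, the number of units, and all numerical values are hypothetical stand-ins, not the wheel data:

```python
import random
random.seed(3)

D_f = 5.0
def fit_line(ts, ys):
    # ordinary least squares for y = a + b*t (path assumed linear here)
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    b = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
    return ybar - b * tbar, b

# hypothetical degradation data: each unit has its own random slope
ts = [1, 2, 3, 4, 5]
units = []
for _ in range(10):
    slope = random.gauss(1.0, 0.15)
    units.append([slope * t + random.gauss(0, 0.1) for t in ts])

# step 1: separate fit per unit; extrapolate to the time the path reaches D_f
pseudo_times = []
for ys in units:
    a, b = fit_line(ts, ys)
    pseudo_times.append((D_f - a) / b)     # "pseudo" failure time of this unit

# step 2: treat the pseudo failure times as a complete sample of failure times
pseudo_times.sort()
def F_T_hat(t):
    return sum(pt <= t for pt in pseudo_times) / len(pseudo_times)

print("median pseudo failure time:", round(pseudo_times[len(pseudo_times) // 2], 2))
```

In step 2 a parametric lifetime model (e.g. Weibull or lognormal) would typically be fitted to the pseudo failure times instead of the empirical distribution used here; the empirical version keeps the sketch self-contained.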
To carry out the two-stage method of parameter estimation, the following steps should be implemented. In addition, an estimator of the error variance, obtained from the i-th unit, is the mean square error. Assume that, by some appropriate reparameterization, the random effects follow (at least approximately) a multivariate normal distribution. Point estimation of F_T(t): the estimate of F_T(t) can be evaluated to any desired degree of precision by using Monte Carlo simulation.
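A minimal sketch of the two stages, assuming a hypothetical through-origin linear path y = beta_i * t + error with a scalar (rather than multivariate) random effect; all values are illustrative:

```python
import random, statistics
random.seed(4)

ts = [1, 2, 3, 4, 5]
# hypothetical units: y = beta_i * t + error, beta_i random across units
units = []
for _ in range(12):
    beta_i = random.gauss(1.0, 0.2)
    units.append([beta_i * t + random.gauss(0, 0.1) for t in ts])

# stage 1: per-unit least squares estimate of beta_i plus the unit's MSE,
# which estimates the error variance
betas, mses = [], []
for ys in units:
    b = sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)
    resid = [y - b * t for t, y in zip(ts, ys)]
    betas.append(b)
    mses.append(sum(r * r for r in resid) / (len(ts) - 1))

# stage 2: estimate the random-effects distribution from the stage-1 estimates
mu_hat = statistics.mean(betas)
sd_hat = statistics.stdev(betas)
sigma2_hat = statistics.mean(mses)          # pooled error variance estimate
print(f"mu = {mu_hat:.3f}, sd = {sd_hat:.3f}, sigma^2 = {sigma2_hat:.4f}")
```

With a multidimensional random effect, stage 2 would estimate a mean vector and covariance matrix of the per-unit estimates instead of a scalar mean and standard deviation.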
This is done by generating a sufficiently large number of random sample paths from the assumed path model with the estimated parameters, and using the proportion failing as a function of time as an estimate of F_T(t). The basic steps are: (1) generate realizations of the random path parameters from their estimated distribution; (2) compute the corresponding simulated degradation paths; these values can then be used in steps (3) and (4): (3) determine, for each simulated path, the time at which it crosses the critical level D_f; and (4) estimate F_T(t) from the simulated empirical distribution of these crossing times.
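The steps above can be sketched compactly. The fitted random-effects distribution and the linear path below are illustrative assumptions carried over from the earlier sketches, not estimates from the article:

```python
import random
random.seed(5)

D_f = 5.0
# assumed fitted distribution of the random slope of a through-origin linear path
mu_hat, sd_hat = 1.0, 0.15
N = 100_000

# steps 1-3: draw path parameters and compute each path's crossing time of D_f
cross_times = []
for _ in range(N):
    b = random.gauss(mu_hat, sd_hat)
    if b > 0:
        cross_times.append(D_f / b)    # linear path crosses D_f at t = D_f / b

# step 4: the proportion of simulated paths failing by t estimates F_T(t);
# dividing by N treats non-increasing paths (b <= 0) as never failing
def F_T_hat(t):
    return sum(ct <= t for ct in cross_times) / N

for t in (4.0, 5.0, 6.0):
    print(f"F_T({t}) ~= {F_T_hat(t):.3f}")
```

The Monte Carlo error shrinks at rate 1/sqrt(N), so the estimate can indeed be made as precise as desired by increasing the number of simulated paths.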