1. Introduction
Control charts are used to establish and maintain statistical control of many processes. The use of a control chart requires that the engineer or analyst select a sample size, a sampling frequency or interval between samples, and the control limits for the chart. Selection of these three parameters is usually referred to as “the design of the control chart.” The design of a control chart has economic consequences in that the costs of sampling and testing, the costs associated with investigating out-of-control signals and possibly correcting assignable causes, and the costs of allowing nonconforming units are all affected by the choice of the control chart parameters. Therefore, it is reasonable to consider the design of a control chart from an economic viewpoint. Among control charts, the cumulative sum (CUSUM) control chart is known to be particularly effective in detecting small process shifts. In the following sections, an economic CUSUM control chart based on Taguchi's quality loss function will be developed.
1.1 CUSUM Background and Theory
CUSUM control charts differ from the common Shewhart control charts in several respects. The basic rule for a Shewhart control chart is to take action when a point falls outside the control limits. Unlike Shewhart charts, CUSUM charts adopt a rule for action that is based on all the data, not only the last few samples. The chart was proposed in 1954 by the British statistician E. S. Page and developed further by him and other British statisticians. In practice, the tabular method is preferred to the V-mask method. The principal interest in CUSUM charts, or CUSUM monitoring via tabulation, is the ability of the procedure to detect small changes in the underlying process more readily than Shewhart charts. Although Shewhart charts are especially effective for detecting larger changes in the mean level, they are not very effective in detecting small shifts of 0.5σ to 1.5σ. In the following sections, the tabular CUSUM chart for detecting an upward shift will be explained.
1.2 CUSUM Control Charts
1.2.1 Construction of the CUSUM Chart
CUSUM charts may be constructed for individual observations or for the means of rational subgroups. Consider the case of an upward shift in the process mean, and let Xi denote the ith observation on the process, which follows a normal distribution with mean μ and standard deviation σ. Initialize the CUSUM L as L0 = 0 and decide on an allowance k. Generally speaking, it is reasonable to choose k relative to the size of the shift we want to detect, i.e., k = 0.5δ, where δ is the size of the shift in standard deviation units [5]. This choice minimizes the ARL1 value, that is, the average run length when the process is out of control, for detecting a shift of size δ at a fixed ARL0, the average run length when the process is in control. A widely used value of k is 0.5. Once k is chosen, an h value must be selected to give the desired in-control ARL0 performance. Hawkins [9] gives a table of k values with corresponding h values and a good explanation of CUSUM control charts.
From this information, one can compute the reference value for an upward shift in the process mean; the reference value is μ + kσ.
Starting from i = 1, the CUSUM values are defined by Li = Li-1 + Xi - (μ + kσ). If the value of Li is less than zero, Li is set equal to zero. This value is used with the next reading, Xi+1, to compute Li+1, and so on; Li is therefore always zero or positive in the case of an upward shift. The CUSUM chart signals an out-of-control condition if Li exceeds hσ, where h is the decision interval. The value hσ functions exactly like the control limits of a Shewhart chart, in that we conclude the process is out of control if the point just plotted on the chart falls outside the limit. They differ in that the control limits of Shewhart charts are usually placed at 3σ, whereas the control limit for the CUSUM chart is at hσ.
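To make the recursion concrete, the following short sketch (in Python, not part of the original program) implements the one-sided tabular CUSUM just described for individual observations; the function name and default values are illustrative only.

```python
def cusum_upper(observations, mu, sigma, k=0.5, h=5.0):
    """One-sided (upper) tabular CUSUM, a minimal sketch of the scheme
    described above for individual observations.  mu and sigma are the
    in-control mean and standard deviation; k is the allowance and h the
    decision interval, both expressed in multiples of sigma."""
    L = 0.0
    signals = []
    for i, x in enumerate(observations, start=1):
        # L_i = max(0, L_{i-1} + X_i - (mu + k*sigma)); reset at zero if negative
        L = max(0.0, L + x - (mu + k * sigma))
        if L > h * sigma:
            signals.append(i)   # out-of-control signal at sample i
            L = 0.0             # restart the scheme after a signal (one common convention)
    return signals
```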
1.2.2 Introduction of Loss Function Concept to CUSUM Control Chart
Taguchi states that a quality loss occurs whenever the quality characteristic value does not fall on the target value. Taguchi also proposed an on-line control model and developed a closed-form solution for selecting the optimal parameters of a uniformly distributed process. The closed-form solution makes it much easier to evaluate process control decisions. However, in this model the sample size is always one, false alarms and the search for assignable causes are ignored, and the use of the uniform distribution is too simplistic and not often found in practice.
Duncan's [3] economic model for the optimum economic design of the X-bar control chart was the cornerstone for much of the subsequent research in this area. His model is simple and practical, but it is not sufficiently general, because it does not allow the process to be shut down during the search for assignable causes. In these models, it is assumed that the process continues during the search for the assignable cause and that the process mean is centered on the target value.
In this research, a binary variable will be introduced to give more flexibility to the proposed model. By using this variable, it is possible to define and compare the two sets of economically optimal parameters for the cases of a complete shutdown and of no shutdown during the search for assignable causes. A new economic design of the CUSUM control chart, incorporating the quality loss function concept, that is equally effective in detecting a small process shift will be presented.
2. Proposed Model
2.1 Symbol Description
A quality loss cost is incorporated into the proposed cost model. In order to derive the expected cost per hour, it is necessary to find the total loss cost function first. The symbols used in this model are similar to those of Lorenzen and Vance [17], who unified the various symbols in the literature.
- a : fixed cost of sampling
- b : variable cost per unit sampled
- Y : cost of investigating a false alarm
- W : cost of searching for and repairing an assignable cause
- c : quality loss constant (= rejection or scrap cost/(tolerance limit)²)
- t : target value of the quality characteristic
- x : quality characteristic value
- E(C) : expected value of the total cost function
- J : production speed per hour
- Z : switch variable; Z = 1 if production continues during the search and repair, Z = 0 if production stops during the search and repair
- n : sample size
- k : reference value
- g : sampling interval
- h : decision limit
- L0 : average run length when the process is in the in-control state
- L1 : average run length when the process is in the out-of-control state
- E(T) : total expected cycle time
- T0 : expected search time when a false alarm occurs
- T1 : expected time to discover the assignable cause
- T2 : expected time to repair the process
- T3 : time to sample and chart one item
- τ : expected time of occurrence of the assignable cause given that it occurs between the ith and (i+1)st samples
- S : expected number of samples taken while the process is in the in-control state
- μ1, μ2 : shifted process mean (upward or downward)
2.1.2 Model Assumptions
1) The process is assumed to have a normal distribution, i.e., X ~ N(μ, σ²), where μ and σ² are known.
2) The process continues while the search for the assignable cause and troubleshooting are in progress; as an optional assumption, the process may instead stop while the search and troubleshooting are in progress.
3) The process starts in an in-control state with mean μ and standard deviation σ, and the process mean is centered on the target value.
4) The process shifts due to a single assignable cause.
5) The assignable cause can be corrected only by the operator or some other intervention, i.e., the process is not self-adjusting.
6) The occurrence of an assignable cause results in either a positive shift of the process mean from μ to μ1 (μ1 = μ + δσ) or a negative shift from μ to μ2 (μ2 = μ - δσ). In this model, a one-sided CUSUM chart is considered. If both positive and negative shifts are to be controlled, two one-sided CUSUM charts with respective reference values k1 and k2 (k1 > k2) and respective decision intervals h and -h can be used.
7) The standard deviation of the process, σ, remains constant.
8) The time to the occurrence of the assignable cause has an exponential distribution with a mean of λ⁻¹ hours.
9) The production speed per hour is known.
10) The quality loss constant, c, is known.
2.2 Definition of Cycle Time
1) Time until the occurrence of the assignable cause
Let T0 be the expected search time for a false alarm. Then the expected time spent searching during false alarms is T0 times the expected number of false alarms, i.e., T0×(S/L0), where L0 is the average run length while in the in-control state and S is the expected number of samples taken while in the in-control state.
Therefore, if the process stops during the search, the expected time until the occurrence of the assignable cause equals λ⁻¹ + T0×(S/L0). In combined form this is λ⁻¹ + (1-Z)×T0×(S/L0), where Z is the switch variable which takes the value 0 or 1.
2) Time until the next sample is taken
Let τ be the expected time of occurrence of the assignable cause within the interval between the ith and (i+1)st samples, and let g be the sampling interval. The derivation is given by Duncan (1956), who shows that τ is well approximated by g/2 - λg²/12; however, the exact formula will be used in this study.
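For reference, the exact expression for τ that follows from the exponential failure distribution, together with the approximation quoted above, can be written as a small sketch (shown for illustration only; this is the standard form used by Duncan and by Lorenzen and Vance):

```python
import math

def tau_exact(lam, g):
    """Expected time of occurrence of the assignable cause within the sampling
    interval (0, g], given that it occurs in that interval, for an exponential
    failure time with rate lam."""
    num = 1.0 - (1.0 + lam * g) * math.exp(-lam * g)
    den = lam * (1.0 - math.exp(-lam * g))
    return num / den

def tau_approx(lam, g):
    """Duncan's approximation g/2 - lam*g^2/12 quoted in the text."""
    return g / 2.0 - lam * g * g / 12.0
```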
3) Time to analyze the sample and chart the result
Let T3 be the expected time to sample and chart one item. For a sample of n items, the time to analyze the sample and chart the result is given by n×T3.
4) Time until the chart gives an out-of-control signal
The expected time until an out-of-control signal occurs is given by g×(L1-1) where L1 is the average run length when the process has shifted to an out-of-control state.
5) Time to discover the assignable cause and repair the process
Let T1 be the expected time to discover the assignable cause and T2 be the expected time to repair the process. The time for search and repair of the assignable cause would be T1+T2.
6) Total expected cycle time
Therefore, the expected cycle time is the sum of all the time elements:
E(T) = λ⁻¹ + (1-Z)×T0×(S/L0) + g×L1 - τ + T1 + T2 + n×T3
2.3 Cost Description
1) The total expected quality loss for the in-control period :
The expected loss per unit item is F = c×{σ² + (μ-t)²}, since the expected value of the quadratic loss c×(x-t)² over a normal distribution with mean μ and variance σ² is c×{σ² + (μ-t)²}. However, since we assume that at the start the process mean conforms to the target value, the expected loss per unit reduces to F = c×σ². Let's denote the total quality loss for the in-control period as Loss0. It is given by
length of in-control period × production speed × expected quality loss per unit,
that is, Loss0 = λ⁻¹×J×c×σ², where J is the production speed per hour.
2) The total expected quality loss for the out-of-control period :
We assume that the positive process shift is of interest and that, before the shift, the process mean conforms to the target value, i.e., μ = t. Thus, the expected quality loss per unit item after the process shift occurs is
F = c×{σ² + (μ1 - t)²} = c×σ²×(1 + δ²).
The total expected quality loss for the out-of-control period is
length of out-of-control period × production speed × expected quality loss per unit item.
Let's denote the total quality loss for the out-of-control period as Loss1.
If the process continues during the search time, i.e., Z = 1, then the net length of the out-of-control period is E(T) - λ⁻¹. Therefore, Loss1 is given by
Loss1 = {E(T) - λ⁻¹}×J×c×σ²×(1 + δ²).
If the process stops during the search and repair time, i.e., Z = 0, then the net length of the out-of-control period is g×L1 - τ + n×T3, which is the same as E(T) - λ⁻¹ - (1-Z)×T0×(S/L0) - (T1 + T2); the time spent investigating false alarms is not counted because no quality loss occurs during that time. Therefore, Loss1 is given by
Loss1 = {E(T) - λ⁻¹ - T0×(S/L0) - T1 - T2}×J×c×σ²×(1 + δ²).
3) The expected sampling and testing cost :
Let's denote the sampling and testing cost as STC. If the process continues during the search and repair, i.e., Z = 1, the expected sampling and testing cost is
STC = (a + b×n)×E(T)/g.
If the process stops during the search and repair, i.e., Z = 0, the expected sampling and testing cost is
STC = (a + b×n)×{E(T) - T0×(S/L0) - T1 - T2}/g.
4) The false alarm cost :
Let S represent the expected number of samples taken while the process is in the in-control state. Lorenzen and Vance [17] have shown that S = e^(-λg)/(1 - e^(-λg)); each sampling interval is completed without a shift with probability e^(-λg), so the number of in-control samples follows a geometric distribution with this expectation. The expected number of false alarms is then S/L0, where L0 is the average run length while the process is in the in-control state. Therefore, the false alarm cost is: cost per false alarm × expected number of false alarms, which is given by Y×(S/L0).
5) The cost of searching and repairing :
Let W represent the cost of searching and repairing, which includes all costs related to searching for and repairing the assignable cause. Note that it is assumed that there is only one assignable cause in this model.
6) Total expected cost per cycle :
The total expected cost per production cycle is the sum of all the above cost factors. Therefore, the total expected cost function is given by
E(C) = Loss0 + Loss1 + STC + Y×(S/L0) + W,
where
- Loss0 = λ⁻¹×J×c×σ²
- Loss1 = {E(T) - λ⁻¹}×J×c×σ²×(1 + δ²) if the process continues during searching and repairing, or
- Loss1 = {E(T) - λ⁻¹ - T0×(S/L0) - T1 - T2}×J×c×σ²×(1 + δ²) if the process stops during searching and repairing
- STC = (a + b×n)×E(T)/g if the process continues during searching and repairing, or
- STC = (a + b×n)×{E(T) - T0×(S/L0) - T1 - T2}/g if the process stops during searching and repairing
- E(T) = λ⁻¹ + (1-Z)×T0×(S/L0) + g×L1 - τ + T1 + T2 + n×T3
The expected cost per hour is expressed as E(C)/E(T).
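As an illustration of how the pieces of Sections 2.2 and 2.3 fit together, the sketch below (Python rather than the original FORTRAN, with illustrative names) evaluates E(C)/E(T) for a given design; the run lengths L0 and L1 must be supplied separately, for example from the quadrature routine outlined in Section 2.5.

```python
import math

def expected_cost_per_hour(n, g, arl0, arl1, *, lam, delta, sigma, J, c,
                           a, b, Y, W, T0, T1, T2, T3, Z):
    """Expected cost per hour E(C)/E(T) of the proposed model (a sketch).

    arl0 and arl1 are L0 and L1 in the text; they depend on n, h and k
    through the CUSUM chart and are computed separately."""
    # S = expected number of samples taken while the process is in control
    S = math.exp(-lam * g) / (1.0 - math.exp(-lam * g))
    # exact expected time of occurrence within the sampling interval
    tau = (1.0 - (1.0 + lam * g) * math.exp(-lam * g)) / (lam * (1.0 - math.exp(-lam * g)))

    # total expected cycle time E(T), Section 2.2
    ET = (1.0 / lam + (1 - Z) * T0 * (S / arl0)
          + g * arl1 - tau + T1 + T2 + n * T3)

    # quality loss per unit: c*sigma^2 in control, c*sigma^2*(1+delta^2) out of control
    loss0 = (1.0 / lam) * J * c * sigma ** 2
    if Z == 1:                       # production continues during search and repair
        out_of_control_time = ET - 1.0 / lam
        stc = (a + b * n) * ET / g
    else:                            # production stops during search and repair
        out_of_control_time = ET - 1.0 / lam - T0 * (S / arl0) - T1 - T2
        stc = (a + b * n) * (ET - T0 * (S / arl0) - T1 - T2) / g
    loss1 = out_of_control_time * J * c * sigma ** 2 * (1.0 + delta ** 2)

    false_alarm_cost = Y * (S / arl0)
    EC = loss0 + loss1 + stc + false_alarm_cost + W
    return EC / ET
```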
2.4 Determination of Model Parameters
The cost model is a function of four control variables n, g, h, and k, because L0 and L1 are functions of these control variables. This is a nonlinear optimization problem in which the values of n, g, h, and k that minimize the cost function must be found. However, we can reduce the number of variables to n, g, and h by choosing k relative to the size of the shift we want to detect, i.e., k = 0.5δ, where δ is the size of the shift in standard deviation units [5]. An analytical solution for these parameters is not possible, because the average run lengths L0 and L1 cannot be obtained without knowing the values of n, h, and k. Therefore, the pattern search technique developed by Hooke and Jeeves in 1961 will be employed to obtain the optimal values of the design parameters. The Hooke and Jeeves pattern search program introduced by Kuester and Mize [16] and the ARL calculation program for the CUSUM control chart developed by Vance [23] will be modified and unified to solve this problem.
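A minimal version of the Hooke and Jeeves pattern search is sketched below for orientation; it is a generic implementation, not the Kuester and Mize [16] FORTRAN code, and it treats all variables as continuous (in practice the sample size n would be rounded to an integer before evaluating the cost).

```python
import numpy as np

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-3, max_iter=1000):
    """Minimize f (e.g. the expected cost per hour as a function of (n, g, h))
    by exploratory and pattern moves, starting from x0."""
    def explore(point, s):
        # exploratory moves: try +/- s in each coordinate, keep improvements
        best = point.copy()
        for i in range(len(best)):
            for d in (s, -s):
                trial = best.copy()
                trial[i] += d
                if f(trial) < f(best):
                    best = trial
                    break
        return best

    base = np.asarray(x0, dtype=float)
    s = step
    for _ in range(max_iter):
        new_base = explore(base, s)
        if f(new_base) < f(base):
            # pattern move: extrapolate along the successful direction and re-explore
            candidate = explore(new_base + (new_base - base), s)
            base = candidate if f(candidate) < f(new_base) else new_base
        elif s > tol:
            s *= shrink        # no improvement: reduce the step size
        else:
            break
    return base
```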
2.5 ARL Calculation Algorithm
Some methods of obtaining the average run length, by either finding approximate expressions or by numerical techniques, have been suggested in the literature, for example by Page [20], Goel and Wu [7], Lucas [18], and Vance [23]. This model follows the method outlined by Goel and Wu [6]. The average run length of a CUSUM chart is given by ARL = N(0)/{1 - P(0)}, as shown by Page [20]. P(0) and N(0) are the special cases, at z = 0, of P(z) and N(z), where
P(z) = F(-z) + ∫[0,h] P(y)·f(y - z) dy,
N(z) = 1 + ∫[0,h] N(y)·f(y - z) dy,
and f(x) is the probability density function of the increments in the cumulative sum, F the corresponding cumulative distribution function, and h the decision interval. For the case when the increment X is normal with mean m and unit variance,
f(x) = (1/√(2π))·exp{-(x - m)²/2}.
These equations are Fredholm integral equations of the second kind; by using the method given by Kantorovich and Krylov [15], the calculation of P(z) and N(z) can be readily performed on a digital computer. The number of Gaussian quadrature points is chosen to give the desired accuracy for a given problem. For obtaining the average run length, z is set equal to zero and the values of P(0) and N(0) are computed.
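The following sketch shows how the two integral equations can be solved numerically with Gauss-Legendre quadrature to obtain the ARL. It is a schematic Python reimplementation of the approach described above (not Vance's [23] subroutine), and it assumes that the CUSUM increments have already been standardized to unit variance with mean m.

```python
import numpy as np
from scipy.stats import norm

def cusum_arl(h, m, num_nodes=48):
    """ARL of a one-sided CUSUM with decision interval h and i.i.d. N(m, 1)
    increments, via Gaussian quadrature applied to P(z) and N(z)."""
    # Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, h]
    x, w = np.polynomial.legendre.leggauss(num_nodes)
    z = 0.5 * h * (x + 1.0)
    wz = 0.5 * h * w

    # kernel K[i, j] = w_j * f(z_j - z_i), with f the N(m, 1) density
    K = wz[None, :] * norm.pdf(z[None, :] - z[:, None], loc=m)
    I = np.eye(num_nodes)

    # solve the discretized Fredholm equations for N(z) and P(z) at the nodes
    N = np.linalg.solve(I - K, np.ones(num_nodes))
    P = np.linalg.solve(I - K, norm.cdf(-z, loc=m))

    # evaluate the equations once more at z = 0 and apply ARL = N(0)/(1 - P(0))
    k0 = wz * norm.pdf(z, loc=m)
    N0 = 1.0 + k0 @ N
    P0 = norm.cdf(0.0, loc=m) + k0 @ P
    return N0 / (1.0 - P0)
```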
3. Computer Program
A computer program written in FORTRAN was developed to find the minimum cost per hour and the three decision variables, i.e., sample size (n), sampling interval (g), and decision limit (h). A copy of the program is included in the appendix. The program is composed of two main parts. The first part finds the ARL values. In order to approximate the ARL values, 48 quadrature points were used in this model. Vance's [23] program, which was adapted for the proposed model, used 24 points. Torng et al. [22] suggest that a large number of quadrature points, such as 48 or 64, should be used to obtain accurate ARL values. The quadrature points and weights included in the subroutine ARLCAL of this model were originally generated by the DGQRUL subroutine in the IMSL FORTRAN library. The second part finds the minimum value of the cost per hour given the cost parameters and the ARL0 and ARL1 values. The Hooke and Jeeves algorithm is employed to find the minimum of the cost function.
3.1 Application of Loss Function to other CUSUM Cost Model
Since the pioneering work of Page [20] on CUSUM control charts, the only published work on the economic design of CUSUM charts for a normal mean is by Taylor [21]. Taylor’s study provides a useful starting point. However, his work can be criticized in some ways :
He has not included sampling costs in the model, which excludes the optimization of the sample size, an optimization that is in fact highly desirable. He has also not optimized the sampling interval, which is an important means of controlling the loss cost. Therefore, in this study, the Goel and Wu [7] model is modified by incorporating the Taguchi loss function.
3.2 Goel and Wu’s Model
The following provides a brief summary of Goel and Wu's [7] model of process behavior. It is assumed that the process starts in a state of control at t = 0 with a mean value μa and a known constant variance σ². A single assignable cause occurs at random and causes a shift in the process mean of a known magnitude δσ, so that its new value is μr = μa + δσ. Starting from an in-control point, the time to failure is assumed to be exponentially distributed with mean λ⁻¹. The exponential failure distribution is assumed because it is commonly encountered in many practical applications. The process stays at this level until a lack of control is indicated by the CUSUM chart and adjustments are made to bring it back to the control level μa. No assignable cause is assumed to occur during the taking of a sample, and the process is not shut down while a search is being made for the assignable cause. Also, the costs of repair and of bringing the process back to control are not charged against the control chart procedure.
In practice, there may be cases where the process level changes are of a continuous type or the process has more than one out-of-control state. Only the two-level process is studied in their paper. The cost model by Goel and Wu is modified as follows :
so that
where
- E(T1) : expected time to occurrence of the assignable cause within the sampling interval
- E(T2) : expected time to discover the assignable cause within the sampling interval
- E(T3) : expected time elapsed between the first sample after the occurrence of the assignable cause and the last sample prior to its detection
- E(T4) : expected additional time that the process is running in the out-of-control state
Therefore,
The production cycle E(T) is :
where
- s : sampling interval
- Lr : average run length when the process is out of control
- D : average time taken to find an assignable cause after its detection
- e : a factor, proportional to n, representing the delay in plotting a point on the chart after the sample has been taken
- n : sample size
3.3. Application of Loss Function to the Cost Model
1) The quality loss during the in-control period is :
2) The quality loss during the out-of-control period is :
3) Total cost during the production cycle (symmetric loss function case) :
where
- af : average number of false alarms per hour = (λ×s×La)⁻¹×E(Tc)
- E(Tc) : expected cycle time = E(Ta) + E(Tr)
- Y : cost of looking for an assignable cause when none exists
- W : cost of looking for an assignable cause when one exists
- ε : average number of times the process actually goes out of control = {E(Tc)}⁻¹
- cm : cost of maintaining the control chart = (b + c×n)/s
- b : cost per sample of sampling and plotting
- c : cost per unit of inspection
- La : average run length when the process is in control
4. Numerical and Sensitivity Analysis
4.1 Numerical Example Analysis
To illustrate the new model, an example problem will be used. Consider a production line for which the process is normally distributed with mean 50, and standard deviation of 5. Production continues during the search for and repair of assignable cause. The fixed cost of sampling is $1.0 and the variable cost per sampled item is $0.05. It takes approximately 0.05 hours to take a sample and analyze each observation.
The management is only interested in an upward shift in the process. The magnitude of the process shift to be detected is one standard deviation, and the process shift occurs according to an exponential distribution with a frequency of one occurrence every one hundred hours of operation, i.e., λ = 0.01. On the average, it takes one and a half hours to investigate a false alarm, and the cost of a false alarm is $100.00. It takes half an hour to discover the assignable cause and one hour to repair the process, and the total cost is $100.00. The scrap cost for a defective unit is $25.00. Suppose that the distance from the nominal value to the upper specification limit is three standard deviations. Therefore, the quality loss constant is 0.11 ($25.0/(3×5.0)² ≈ 0.11).
Also assume that the production speed is 50 units per hour. The process target value is 50.0 units. The current process mean is centered on the target value. In order to determine the optimum design parameters, the example problem is solved by running the computer program, written for this purpose.
4.2 Results
The results obtained for the example problem are as follows: the optimal sample size (n) is 11, the sampling interval (g) is 1.4 hours, and the decision limit (h) is 1.6. The reference value, μ + kσ, is the target plus one half the magnitude of the shift we are interested in, that is, 50 + 0.5×5 = 52.5. These results mean that, in order to operate a one-sided CUSUM control chart economically for this production process, a sample of 11 units should be taken at intervals of 1 hour and 24 minutes, with an expected cost per hour of $145.34. If the cumulative sum value exceeds the decision limit of 1.6, it is concluded that the process has shifted upward. The average run lengths in the in-control and out-of-control states are 273.84 and 1.32, respectively.
Therefore, even if the process mean is in the in-control state, an out-of-control signal, i.e., a false alarm, will be generated about every 274 samples on the average. From the average run length of 1.32, the control chart will require about 1.32 samples to detect the upward process shift; additionally, on the average, about 1.85 hours (1 hour 51 minutes) will elapse between the process shift and its detection. These results are summarized in <Table 1>.
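For illustration, the example inputs and the reported run lengths can be fed to the cost-per-hour sketch given at the end of Section 2.3 (the reported optimum itself was obtained with the FORTRAN program of Section 3; small differences from the published $145.34 may remain because of rounding):

```python
# inputs from the example problem (Section 4.1)
params = dict(lam=0.01, delta=1.0, sigma=5.0, J=50.0,
              c=25.0 / (3 * 5.0) ** 2,     # quality loss constant, about 0.11
              a=1.0, b=0.05, Y=100.0, W=100.0,
              T0=1.5, T1=0.5, T2=1.0, T3=0.05,
              Z=1)                         # production continues during search/repair

n, g = 11, 1.4                             # reported optimal sample size and interval
arl0, arl1 = 273.84, 1.32                  # reported in-control / out-of-control ARLs

hourly_cost = expected_cost_per_hour(n, g, arl0, arl1, **params)
print(round(hourly_cost, 2))               # close to the reported $145.34
```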
4.3 Sensitivity Analysis
4.3.1 Effect of Change in the Amount of Process Shift
Three cases with process shifts of 0.5σ, 1.5σ, and 2.0σ were investigated with the example problem discussed in the previous section. According to <Table 2>, as the magnitude of the process shift increases, the sample size and the sampling interval decrease. That means that, given the cost factors, larger samples are needed in order to detect a small process shift; the sampling interval, however, becomes longer as the sampling cost increases. The in-control ARL becomes longer as the magnitude of the process shift increases, while the out-of-control ARL becomes shorter.
4.3.2 Effect of Change in the Mean Time between Failures
The results are given in <Table 3>.
4.3.3 Effect of Simultaneous Change in the Frequency of Process Shift and the Magnitude of Process Shift
In order to evaluate the significance of a simultaneous change in the frequency of the process shift, λ, and the magnitude of the process shift, four cases with λ equal to 0.01, 0.02, 0.05, and 0.1, respectively, were investigated in combination with shift magnitudes of 0.5σ and 1.5σ. The other cost factors and model assumptions are the same as in the example problem.
4.3.4 Comparison of example problem with the Goel & Wu Model
To investigate the nature of the proposed model, a comparison between the proposed model and the Goel & Wu model was made. In the Goel & Wu model, which was modified to use the quality loss concept for comparison purposes, four cases with the magnitude of the process shift increasing in steps of 0.5 sigma were investigated. The optimum values for the four cases are shown in <Table 6>; except for the 0.5 and 1.0 sigma cases, all the optimal values of the decision parameters are the same as those of the proposed model shown in <Table 3>. In the 0.5 sigma case of the Goel & Wu model, the optimal sample size is larger than that of the proposed model. However, the costs per hour differ from each other, which is due to the difference in the cost models. Detailed results are given in <Table 4> and <Table 5>.
5. Conclusions
The quality of a product is measured in terms of a number of specific quality characteristics. A key component of Taguchi's philosophy is the reduction of variability. Montgomery [19] states that the Taguchi philosophy is entirely consistent with the continuous improvement philosophy of Deming and Juran. This research has provided a methodology for the design of cumulative sum (CUSUM) control charts using the Taguchi quality loss concept, based on the minimum cost per hour criterion. The proposed model differs from previous models in that it assumes that quality loss is incurred even during the in-control period. The new model considers the parameters of the CUSUM chart and the cost factors associated with the process being studied. The numerical study shows that the optimal parameters for the sample problems are consistent with those of other models. The effects of the process shift magnitude, the sampling cost, and the frequency of the process shift have been investigated. As the process shift frequency decreases, the optimum procedure requires larger samples, taken less frequently, and a smaller decision limit.