ISSN : 2005-0461(Print)
ISSN : 2287-7975(Online)
Journal of Society of Korea Industrial and Systems Engineering Vol.38 No.2 pp.47-55
DOI : https://doi.org/10.11627/jkise.2015.38.2.47

Rescheduling on Parallel Machines with Compressible Processing Times

Suhwan Kim†
Department of Military Operations Research, Korea National Defense University
Corresponding author ksuhwan@kndu.ac.kr
Received February 3, 2015; Revised March 20, 2015; Accepted March 20, 2015

Abstract

This paper deals with rescheduling on unrelated parallel-machines with compressible processing times, assuming that the arrival of a set of new jobs triggers rescheduling. It formulates this rescheduling problem as an assignment problem with a side constraint and proposes a heuristic to solve it. Computational tests evaluate the efficacy of the heuristic.


    1.Introduction

    It is very common that a set of jobs being processed according to a schedule must be rescheduled because of a disruption. Events that might cause a disruption include the arrival of a new job, the breakdown of a machine, or the unavailability of materials. Most deterministic machine scheduling problems deal with constant job processing times. In various real-life systems, however, processing times may be compressed by allocating additional resources, such as money, overtime, energy, fuel, catalysts, or manpower.

    This paper deals with rescheduling on unrelated parallel machines with compressible processing times and proposes a heuristic. We assume that the original set of jobs had been optimally scheduled on unrelated parallel machines to minimize the sum of completion times and the costs of compressing processing times and that a disruption occurs when a set of new jobs arrives. In rescheduling, some additional measures may be needed, because any change of the original schedule may lead to additional cost (e.g., transportation) and rescheduling effort (e.g., job reassignment, workforce rescheduling, and/or customer delivery-date changes). Rescheduling parallel machines may result in reassigning some job to a machine other than the one on which it was originally scheduled, a result we call machine reassignment. We consider the disruptions caused by machine reassignments, either as part of the objective function or by imposing a constraint to limit their number. When the cost of reassignment is difficult to estimate, we limit the number of machine reassignments by imposing a constraint, resulting in an assignment problem with a side constraint (APSC) (i.e., an NP-hard problem [13]).

    Several papers have contributed to rescheduling on parallel machines with machine reassignment as an additional measure. For example, Alagoz and Azizoglu [4] and Azizoglu and Alagoz [6] studied the objective of minimizing the sum of completion times and either included penalties for machine reassignment in the objective function or limited the number of reassignments permitted with an additional constraint. Curry and Peters [8] limited machine reassignments by invoking a constraint and used simulation to demonstrate the effect of reassignment in the parallel-machine scheduling environment. Ozlen and Azizoglu [15] considered unrelated parallel machines to minimize the total flow time, restricting the total reassignment cost.

    Compressible processing times can provide flexibility for rescheduling by managing processing times. Scheduling with compressible processing times has been studied extensively. Shabtay and Steiner [19] gave an extensive survey on this topic. Gurel and Akturk [11] studied a single machine with compressible processing times under various non-linear compression cost functions. Yin and Wang [22] studied a single machine to minimize a cost function containing makespan, considering compressible processing times and learning effect. Yang [21] studied rescheduling on a single-machine configuration with compressible processing times, allowing time compression only for newly arrived jobs. Akturk et al. [3] introduced parallel-machine match-up scheduling problems to cope with machine disruptions when processing times of the jobs are compressible at a certain manufacturing cost. Gurel and Cincioglu [10] studied a parallel-machine rescheduling problem with compressible processing times for number of disrupted jobs and manufacturing cost objectives; they considered a bicriteria rescheduling problem to deal with the trade-off between the number of on-time jobs and manufacturing cost, admitting machine reassignment.

    Several solution approaches have been proposed for APSC. Murty [14] proposed a ranking algorithm, which generates all assignments in order of increasing cost and selects the least-cost assignment that satisfies the side constraint. Gupta and Sharma [9] developed an enumeration scheme using a search tree in which each node represents a complete, feasible solution, so that a node can be fathomed if the solution it represents is infeasible with respect to the side constraint. Aggarwal [2] presented a two-stage algorithm that applies Lagrange relaxation to eliminate the side constraint and then searches for an optimal solution using Murty’s [14] ranking algorithm. Aboudi and Jornsten [1] took a polyhedral approach, presenting several classes of valid inequalities and associated separation problems. Mazzola and Neebe [13] presented a branch-and-bound (B&B) algorithm that incorporates the subgradient method as part of a bounding procedure and a subgradient-based heuristic. Punnen and Aneja [18] observed that the Mazzola-Neebe heuristic is sensitive to the range of the coefficients in the objective function and in the side constraint. Lieshout and Volgenant [12] proposed a B&B algorithm, obtaining lower bounds by utilizing Lagrange relaxation and a heuristic by modifying the B&B algorithm. They mentioned that the B&B algorithm and their heuristic were also affected by the range of coefficients: if the ranges double, their run times also double.

    In our study, we consider rescheduling on unrelated parallel machines with compressible processing times and propose a heuristic which is less sensitive to the range of coefficients. The body of this paper is organized in four sections. Section 2 formulates the scheduling and rescheduling problems we study. Section 3 presents our LP(Linear Programming)-based heuristic for APSC. Section 4 describes a computational evaluation of the proposed heuristic. Finally, section 5 relates our conclusions and fertile directions for future research.

    2.Problem Definitions

    This section comprises four subsections: the first introduces the underlying scheduling problem, and each of the subsequent three subsections formulates a different version of the rescheduling problem. The first version includes a cost for each machine reassignment; the second incorporates a constraint to limit the number of such reassignments; and the third minimizes compression costs and introduces a constraint to limit the sum of completion times. We assume that rescheduling is triggered by the arrival of a set of new jobs.

    2.1.Rm | xij ≤ uij | ΣCj + Σcijxij

    We use the standard three-field classification scheme (i.e., α | β | γ) of Pinedo [17] to denote the scheduling problem as Rm | xij ≤ uij | ΣCj + Σcijxij, where the α field uses Rm to denote m unrelated parallel machines; the β field gives xij, the actual amount of time compression for job j on machine i, and uij, the maximum possible time compression for job j on machine i (0 ≤ xij ≤ min(uij, pij)); the γ field specifies the objective, which involves the sum of completion times, Cj, and compression costs, for which cij denotes the cost per unit time of compressing job j on machine i. This subsection formulates this scheduling problem. Throughout this paper, we use the following notation :

    Indices

    i machines

    j jobs

    k sequence positions

    Index sets

    I unrelated parallel machines; I = {1, ⋯, m }

    J jobs to be processed; J = {1, ⋯, n}

    K sequence positions; K = J

    Parameters

    Cj completion time of job j

    cij cost per unit time of compressing job j on machine i

    pij normal (i.e., uncompressed) processing time of job j on machine i

    uij maximum possible time compression of job j on machine i

    Decision variables

    xij amount of processing time compression of job j on machine i (i.e., actual processing time is (pij - xij))

    yijk 1 if job j is scheduled as the kth to last job on machine i, 0 otherwise.

    For this scheduling problem, Alidaee and Ahmadian [5] proved the following Proposition, which is based on a result of Vickson [20] :

    Proposition In the Rm | xij ≤ uij | ΣCj + Σcijxij scheduling problem, there exists an optimal schedule in which xij = 0 if cij ≥ k and xij = uij if cij < k (i.e., the processing time of a job is either left at its normal value (pij) or fully compressed, depending on the sequence position k of the job) (Proof. see [5]). □

    Based on the Proposition, the underlying scheduling problem can be formulated by extending the Rm || ΣCj scheduling model as follows [17] :

    P(0) : Min Σi ∈ I Σj ∈ J Σk ∈ K [kpij + min(0, cij - k)uij] yijk (1)

    s.t. Σi ∈ I Σk ∈ K yijk = 1, j ∈ J (2)

    Σj ∈ J yijk ≤ 1, i ∈ I, k ∈ K (3)

    yijk ∈ {0, 1}, i ∈ I, j ∈ J, k ∈ K (4)

    Objective function (1) minimizes the sum of completion times and compression costs. Constraints (2) assure that each job is scheduled exactly once and (3) assure that each position on each machine is taken by at most one job. Constraints (4) require the decision variables to be binary; however, problem P(0) is the well-known assignment problem, whose constraint matrix is totally unimodular, so an optimal integer solution can be prescribed by linear programming in polynomial time.
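    Because P(0) assigns jobs to (machine, position) slots, it can be solved directly with any linear assignment solver. The following is a minimal sketch (ours, not from the paper) on hypothetical data, using SciPy's `linear_sum_assignment`; the cost of placing job j in the k-th-to-last position on machine i is kpij + min(0, cij - k)uij, with the Proposition's fully-compress-or-not rule built into the coefficient.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical data: m = 2 unrelated machines, n = 2 jobs.
p = np.array([[4, 2],   # p[i][j]: normal processing time of job j on machine i
              [3, 6]])
c = np.array([[3, 1],   # c[i][j]: cost per unit time of compressing job j on machine i
              [2, 4]])
u = np.array([[1, 1],   # u[i][j]: maximum possible time compression
              [1, 2]])
m, n = p.shape

# Row (i*n + k-1) is the k-th-to-last position on machine i; column j is job j.
# Coefficient from objective (1): k*p_ij + min(0, c_ij - k)*u_ij.
cost = np.empty((m * n, n))
for i in range(m):
    for k in range(1, n + 1):
        for j in range(n):
            cost[i * n + (k - 1), j] = k * p[i, j] + min(0, c[i, j] - k) * u[i, j]

rows, cols = linear_sum_assignment(cost)   # assigns each job to exactly one slot
objective = cost[rows, cols].sum()         # completion times + compression costs
```

    For this toy instance the optimum schedules job 0 last on machine 1 and job 1 last on machine 0, at total cost 5.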

    2.2.Rm | xij ≤ uij, ai | ΣCj + Σcijxij + Σj ∈ JR wij Dij

    We assume that a disruption occurs when a (index) set JN of new jobs arrives at time t as a schedule for the (index) set of original jobs, Jo , is ongoing. In particular, jobs in (index) set JP are in process at time t, each on a scheduled machine, which will become available only after completing the job it is processing.

    For example, in <Figure 1>, a set of new jobs arrives at time t while machines q and r are processing scheduled jobs with remaining processing times aq and ar, respectively. In this case, machines q and r will be available at times t + aq and t + ar, respectively. Upon arrival of the new jobs, we delete the sets of completed (JC) and in-process (JP) jobs from Jo and re-index the remaining jobs in set J′ = {Jo\ (JP ∪ JC)}∪JN. Without loss of generality, we assume throughout this paper that a set of new jobs arrives at time 0.

    In this and the following subsections, we formulate important variations of the rescheduling problem. First, we introduce some additional notation :

    JR index set of originally scheduled jobs remaining after deleting in-process and completed jobs; Jo\ (JPJC)

    JN index set of newly arrived jobs

    J′ index set of jobs to be rescheduled; JRJN

    ij machine to which job j ∈ JR is assigned in the initial schedule

    ai time at which machine i will be available, given that new jobs arrive at time 0

    wij cost of reassigning job j ∈ JR onto machine i ∈ I \{ij}

    Dij 1 if job j ∈ JR is reassigned onto machine i ∈ I \ {ij}, 0 otherwise.

    We penalize the amount of inconvenience reassignments cause as a part of the objective function or limit the permissible number of reassignments (i.e., amount of inconvenience) by incorporating a constraint. For example, failure to meet a delivery date may affect customer goodwill to a degree that is hard to estimate and, therefore, the amount of inconvenience can be limited by incorporating a constraint to restrict the number of reassignments permitted. However, if machine reassignment incurs additional costs that can be relatively easily valued, the objective function can reflect the trade-off between the costs associated with rescheduling and inconvenience.

    The first version of the rescheduling problem can be designated by Rm | xij ≤ uij, ai | ΣCj + Σcijxij + Σj ∈ JR wijDij, which includes an additional measure, Σj ∈ JR wijDij, the total weighted cost of reassigning jobs in set JR, each of which had been scheduled originally on a particular machine. To clarify, the reschedule can assign job j to the same machine (ij) on which it had been scheduled or reassign it to a different machine (i ∈ I \{ij}). We assume that a job that is being processed when rescheduling occurs remains on the machine that is processing it and its processing time is not compressed.

    The symbol ai in the β field represents the time at which machine i will be available; ai = 0 if machine i is not processing a job when the set of new jobs arrives at time 0. If a job j ∈ JR is reassigned, Σi ∈ I\{ij} Dij = Σi ∈ I\{ij} Σk ∈ K yijk = 1; otherwise, job j remains assigned to machine ij according to the original schedule, so that Σi ∈ I\{ij} Dij = Σi ∈ I\{ij} Σk ∈ K yijk = 0. Rescheduling problem Rm | xij ≤ uij, ai | ΣCj + Σcijxij + Σj ∈ JR wijDij can be formulated as assignment problem P(1).

    P(1) : Min Σi ∈ I Σj ∈ J Σk ∈ K [kpij + ai + min(0, cij - k)uij] yijk + Σj ∈ JR Σi ∈ I\{ij} Σk ∈ K wij yijk (5)

    s.t. (2)-(4).

    Constraints are those of the assignment problem, so that P(1) can be solved in polynomial time.

    2.3.Rm | xij ≤ uij, ai, Σj ∈ JRDij ≤ U | ΣCj + Σcijxij

    When the cost of reassignment (e.g., wij in P(1)) is difficult to estimate, it may be more appropriate to limit the number of jobs reassigned by imposing a constraint. This version of the rescheduling problem incorporates a constraint that imposes a limit, U, on the number of machine reassignments.

    P(2) : Min Σi ∈ I Σj ∈ J Σk ∈ K [kpij + ai + min(0, cij - k)uij] yijk

    s.t. (2)-(4), and

    Σj ∈ JR Σi ∈ I\{ij} Σk ∈ K yijk ≤ U. (6)

    Constraint (6) invokes the reassignment limitation. This APSC does not possess the unimodularity property; it is NP-hard [16].

    2.4.Rm | xij ≤ uij, ai, ΣCj ≤ T | Σcijxij

    In this subsection, we consider that compressible processing times can provide flexibility for rescheduling by managing processing times. Rescheduling upon the arrival of a set of new jobs may increase the sum of completion times, even of originally scheduled jobs. Furthermore, it may be important to allow processing time compression to satisfy customer demands in a more timely manner; thus, total cost may increase due to time compression.

    This problem can be designated by Rm | xij ≤ uij, ai, ΣCj ≤ T | Σcijxij, which invokes a limit, T, on the sum of completion times. To formulate this problem as an integer program, we introduce the following decision variables, assuming that all processing and compression times are integers.

    yijkl 1 if job j is scheduled as the kth to last job on machine i and its time compression is the integer value l , 0 otherwise

    We also formulate this problem as an APSC as follows :

    P(3) : Min Σi ∈ I Σj ∈ J Σk ∈ K Σl ∈ Uij cij l yijkl (7)

    s.t. Σi ∈ I Σk ∈ K Σl ∈ Uij yijkl = 1, j ∈ J (8)

    Σj ∈ J Σl ∈ Uij yijkl ≤ 1, i ∈ I, k ∈ K (9)

    Σi ∈ I Σj ∈ J Σk ∈ K Σl ∈ Uij [ai + k(pij - l)] yijkl ≤ T (10)

    yijkl ∈ {0, 1}, i ∈ I, j ∈ J, k ∈ K, l ∈ Uij. (11)

    where Uij denotes the set of possible compression times for job j on machine i (i.e., Uij = {0, 1, ⋯, uij}). Objective function (7) minimizes the total cost of processing time compression. Constraints (8) and (9) assure that each job is scheduled exactly once and each position on each machine is taken by at most one job, respectively. Constraint (10) assures that the sum of completion times does not exceed the limit, T. The additional constraint (10) destroys total unimodularity and makes P(3) NP-hard [16].

    3.The Linear Relaxation Heuristic

    In this section, we propose a heuristic, the Linear Relaxation Heuristic (LRH), which is based on the linear relaxations of P(2) and P(3). Our LRH can be applied to any model that possesses the total unimodularity property, augmented with a single side constraint. It solves the linear relaxation once and combines the logic of sensitivity analysis applied to the right-hand side (RHS) of the side constraint with the fact that all extreme points are 0-1 integer vectors. Note that sensitivity analysis of the RHS specifies the amount by which the RHS value can change before the current basis changes.

    To explain the idea of our heuristic, we present an example with feasible region F = {(x1, x2, x3) | x1 + x2 + x3 ≤ 2.5, x1 ≤ 1, x2 ≤ 1, x3 ≤ 1, x1 ≥ 0, x2 ≥ 0, x3 ≥ 0} and cost vector c = (- 0.5, - 1, - 1). <Figure 2> shows that the hyperplane represented by the first constraint in the definition of F cuts through the (unit hypercube) polytope described by the other inequalities, forming three fractional extreme points (i.e., (0.5, 1, 1), (1, 0.5, 1), and (1, 1, 0.5)).

    In <Figure 2>, as the RHS of inequality x1 + x2 + x3 ≤ 2.5 is gradually decreased, the basis does not change but the objective value slowly increases until the RHS reaches the value 2 and the translated hyperplane (represented by the dotted lines) intersects extreme points (0, 1, 1), (1, 0, 1), and (1, 1, 0). Since sensitivity analysis of the RHS provides lower and upper bounds on the RHS coefficient within which the current basis remains optimal, we can use it to determine how much the RHS must change to translate the hyperplane far enough to intersect an adjacent extreme point, which must be integer, because the underlying polytope is, by assumption, totally unimodular. As in APSC, all extreme points of feasible region F are 0-1 integer vectors, except the fractional extreme points formed by the additional constraint.
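    The geometry above is easy to check numerically. The sketch below (ours, assuming SciPy is available) solves the LP over F with the original RHS of 2.5 and the translated RHS of 2, recovering the fractional vertex (0.5, 1, 1) and the integer vertex (0, 1, 1), respectively.

```python
from scipy.optimize import linprog

c = [-0.5, -1.0, -1.0]          # cost vector from the example
bounds = [(0, 1)] * 3           # the unit-hypercube constraints

sols = {}
for rhs in (2.5, 2.0):          # original and translated hyperplane
    res = linprog(c, A_ub=[[1.0, 1.0, 1.0]], b_ub=[rhs], bounds=bounds)
    sols[rhs] = (res.x, res.fun)

# sols[2.5] -> fractional extreme point (0.5, 1, 1) with value -2.25
# sols[2.0] -> integer extreme point (0, 1, 1) with value -2.0
```

    Both optima are unique, so any exact LP solver returns exactly these vertices.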

    Our heuristic solves the linear relaxation of APSC once, then performs a sensitivity analysis of the RHS to determine the value that changes the basis, equivalently, translates the hyperplane until it intersects at least one feasible, integer extreme point of the assignment polytope. If the solution is fractional, the RHS value of the side constraint is replaced with the lower bound of the range and then the heuristic finds an adjacent extreme-point solution by multiplying the inverse of the current basis by the replaced RHS vector. Since the constrained optimal solution to APSC is not always adjacent to the current fractional solution, the heuristic does not always guarantee the optimal solution.

    Let z*LP, δ*, and l ≤ R ≤ u denote the optimal solution value, the optimal dual variable value corresponding to the side constraint, and the sensitivity range for the RHS value R of the side constraint in the linear relaxation of APSC, respectively.

    Property If l = R, or l < R and δ* = 0, the current solution is optimal for the APSC.

    Proof. If l = R, the side constraint intersects the underlying polytope at an optimal integer extreme point. Now, suppose l < R and δ* = 0. If the RHS of the side constraint in APSC is decreased by R - l, the new objective value is z*LP + δ*(l - R). Since δ* = 0, there is no change in the objective value. This happens when the side constraint is redundant (e.g., if the orientation of the objective function is such that the solution to the linear relaxation is an integer extreme point). □

    Letting B and bR denote the optimal basis of the linear relaxation of APSC and the RHS vector whose side-constraint entry is R, respectively, we now detail LRH as applied to APSC.

    LRH Solve the linear relaxation of APSC. If infeasible, STOP : APSC is infeasible. Else, apply sensitivity analysis to determine the lower and upper bounds for the RHS coefficient of the side constraint, l ≤ R ≤ u, that allow the current basis to remain optimal.

    If the solution is integer, STOP : the current solution is optimal.

    Else, compute B⁻¹bl (i.e., the RHS vector with l in place of R, premultiplied by the basis inverse) to find the new extreme-point solution.
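    As an illustration only, the sketch below mimics LRH on the small example of this section. SciPy's `linprog` exposes neither the optimal basis nor RHS ranging, so two steps are approximated: the ranging lower bound l is taken as the RHS rounded down (adequate for this integer-data example), and the B⁻¹bl step is replaced by a re-solve at the tightened RHS. A faithful implementation would reuse the basis inverse as described above.

```python
import numpy as np
from scipy.optimize import linprog

def lrh_sketch(c, A_side, b_side, bounds, tol=1e-9):
    """Illustrative LRH: solve the LP relaxation once; if the optimal
    vertex is fractional, tighten the side constraint's RHS to (an
    approximation of) its ranging lower bound and take the resulting
    integer extreme point."""
    res = linprog(c, A_ub=A_side, b_ub=b_side, bounds=bounds)
    if res.status != 0:
        return None                            # LP infeasible => APSC infeasible
    if np.all(np.abs(res.x - np.round(res.x)) < tol):
        return res.x                           # integer => optimal for APSC
    b_low = [np.floor(b) for b in b_side]      # stand-in for the ranging bound l
    return linprog(c, A_ub=A_side, b_ub=b_low, bounds=bounds).x

# The Section 3 example: fractional LP optimum (0.5, 1, 1) maps to (0, 1, 1).
x = lrh_sketch([-0.5, -1.0, -1.0], [[1.0, 1.0, 1.0]], [2.5], [(0, 1)] * 3)
```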

    4.Computational Evaluation

    In this section, we evaluate the efficacy of LRH using randomly generated instances of problems P(2) and P(3). We program LRH using the C/C++ language and the CPLEX 12.1.0 callable library, and perform all computations on a Dell PC running Windows 7 with a 2.67 GHz CPU and 2 GB memory. The first subsection describes the test instances and the second relates the test results.

    4.1.Instance Generation

    This section reviews instance generation. For problem P(2), our tests involve three factors (the numbers of machines, old jobs, and new jobs) with three levels each : 2, 5, and 10; 100, 200, and 600; and 10, 50, and 100, respectively. This results in the 27 cases shown in <Table 1>, and we generate 20 instances of each randomly.

    We generate each independent instance randomly using the discrete uniform distribution (DU); DU [10, 110] for processing times (pij), costs of processing time compressions (cij), and machine available times (ai); DU [0, 0.5 × pij ], for the maximum possible processing time compressions (uij); and, finally, DU [0.2 × |JR |, 0.8 × |JR |] for the machine reassignment limit (U) in which |JR| denotes the number of old jobs to be rescheduled.
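    For concreteness, the generation scheme above can be sketched as follows (our code, with hypothetical names; DU[a, b] maps to `rng.integers(a, b + 1)`):

```python
import numpy as np

rng = np.random.default_rng(2015)  # arbitrary seed, for reproducibility only

def generate_p2_instance(m, n_old, n_new):
    """One random P(2) instance following the paper's generation scheme."""
    n = n_old + n_new
    p = rng.integers(10, 111, size=(m, n))   # p_ij ~ DU[10, 110]
    c = rng.integers(10, 111, size=(m, n))   # c_ij ~ DU[10, 110]
    a = rng.integers(10, 111, size=m)        # a_i  ~ DU[10, 110]
    u = rng.integers(0, p // 2 + 1)          # u_ij ~ DU[0, 0.5 * p_ij]
    U = int(rng.integers(int(0.2 * n_old), int(0.8 * n_old) + 1))
    return p, c, a, u, U

p, c, a, u, U = generate_p2_instance(m=2, n_old=100, n_new=10)
```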

    For problem P(3), we consider the first factor with three levels (2, 5, and 10 machines) but limit the second and third factors to one level (100 old and 50 new jobs), because this problem incorporates variables with four indices and hence causes memory issues if more jobs are involved. <Table 2> displays those instances.

    Again, we generate 20 independent instances for each case as described above, except that we generate the limit on the sum of completion times as follows

    T = DU[0.6 × tmin, 0.9 × tmin]

    where tmin = Min{Σi ∈ I Σj ∈ J Σk ∈ K Σl ∈ Uij k pij yijkl : s.t. (8), (9), and (11)} (i.e., the minimum value of the sum of completion times when processing time compression is disallowed). Depending on the value of tmin, problem P(3) can be infeasible. Thus, we repeat instance generation until we obtain 20 feasible instances.
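    The value tmin is itself a plain assignment problem (positions as rows, jobs as columns, no compression), so it can be computed quickly. A small sketch under our own naming, assuming SciPy and following the text's definition (without the ai offsets):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def t_min(p):
    """Minimum of sum_{i,j,k} k * p_ij * y_ijk over feasible position
    assignments, i.e., the minimum sum of completion times when
    compression is disallowed."""
    m, n = p.shape
    cost = np.empty((m * n, n))
    for i in range(m):
        for k in range(1, n + 1):
            cost[i * n + (k - 1)] = k * p[i]   # job in k-th-to-last slot on machine i
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

# Tiny check: two machines, two jobs; each job goes last on its faster machine.
tmin = t_min(np.array([[2, 3], [4, 1]]))
```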

    4.2.Computational Results

    All computational results are averages taken over the 20 instances associated with each case. <Table 3> displays test results for cases 1 through 27. Column Nopt gives the number of instances out of 20 that our heuristic solves to optimality. Columns RDavg , RDmax , and RDmin provide average, maximum, and minimum relative deviations from the optimal value, respectively.

    The relative deviation RD is computed as RD = (zH - zIP)/zIP × 100, in which zH and zIP denote the objective value of LRH and the optimal objective value, respectively. Columns TRavg, TRmax, and TRmin give average, maximum, and minimum values of CPLEX run time divided by LRH run time, respectively. The last column gives Tavg, the average run time of LRH in each case.
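    In code, the relative-deviation measure reads (a trivial helper of our own, included only to make the formula concrete):

```python
def relative_deviation(z_h, z_ip):
    """Percentage by which the heuristic value z_H exceeds the optimum z_IP:
    RD = (z_H - z_IP) / z_IP * 100."""
    return (z_h - z_ip) / z_ip * 100.0
```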

    Rows 1~9, 10~18, and 19~27 give results for 2, 5, and 10 machines, respectively. The underlined rows following rows 9, 18 and 27 report the average number of optimal solutions out of 20, RDavg and TRavg for all instances involving 2, 5, and 10 machines, respectively.

    LRH optimally solves about 78.9% of our test instances. For remaining instances, it provides near-optimal solutions, resulting in an average relative deviation over all test instances of 0.06%. Column TRavg shows that LRH solves instances about 3.7 times faster than CPLEX, on average. In particular, LRH solves instances that have many jobs relative to the number of machines much faster than CPLEX. For example, LRH solves case 9 about 9.8 times faster than CPLEX, on average.

    We also compare cases in which the ratio of the number of old jobs to the number of new jobs is relatively high with cases for which the ratio is small, in particular, equal to 1. For this test, we adapt medium-sized cases 11, 14, and 17, which all have 50 new jobs but 100, 200, and 600 old jobs, giving ratios of 2, 4 and 12, respectively. To specify new cases, 11d, 14d, and 17d, we change the numbers of old and new jobs, keeping the same total number of jobs in each case (150, 250 and 650, respectively), to make each ratio equal to 1, resulting in 75, 125, and 325 old as well as new jobs in these cases, respectively.

    <Table 4> assesses the difference that the ratio of the number of old jobs to the number of new jobs makes on run time and solution quality by comparing cases 11, 14, and 17 with cases 11d, 14d, and 17d, respectively. Columns Tavg, Tmax, and Tmin give average, maximum, and minimum values of LRH run time, respectively. <Table 4> shows that neither the run time nor the solution quality of our heuristic is affected by the ratio of the number of old jobs to the number of new jobs, even though the ratios differ widely.

    <Table 6> gives test results obtained for cases 28, 29, and 30. These results show that very few instances are solved optimally but that the average relative deviation barely exceeds 1%. However, LRH solves these test instances about 25 times faster than CPLEX, on average.

    We use cases of problem P(3) to test the sensitivity of our heuristic to the values of the objective function and the RHS coefficients. The run time of the heuristic proposed by Lieshout and Volgenant [12] doubles if the ranges of the coefficients double. Thus, we double the values of the coefficients of the objective function and the RHS to define cases 28d, 29d, and 30d, respectively, then generate 20 independent instances of each case to compare test results with cases 28, 29, and 30, respectively.

    <Table 5> compares results associated with cases i and id for i = 28, 29, 30. The first six columns and the last three columns correspond to those of <Table 2> and <Table 4>, respectively. <Table 5> shows that solution qualities (i.e., RDavg, RDmax, and RDmin) and Tavg for cases i and id, i = 28, 29, 30, are almost the same, even though the coefficients of the objective function and the RHS are doubled in the latter three cases. We have similar results for problem P(2).

    5.Conclusions and Recommendations for Future Research

    This paper addresses the problem of rescheduling on unrelated parallel machines with compressible processing times, where the objective is to minimize the sum of completion times, compression costs, and, perhaps, job reassignment costs. We formulate this problem as an APSC, and we introduce a heuristic for problems of this type.

    Tests show that our heuristic solves some 78.9% of our randomly generated test instances for cases 1 through 27 to optimality, provides near-optimal solutions for all test instances, and requires much shorter run times than CPLEX, especially on instances of problem P(3) : Rm | xij ≤ uij, ai, ΣCj ≤ T | Σcijxij. Furthermore, our tests show that our heuristic is not sensitive to the values of the objective function coefficients or of the RHS coefficient of the side constraint. These tests show that our heuristic resolves rescheduling problems effectively and that it can be applied to find a good solution quickly in cases for which finding an optimal solution may require prohibitive run time.

    Our heuristic can also be applied to any problem whose constraint set possesses the total unimodularity property, augmented with a single side constraint; we plan to test it on such instances. In addition, we will try to modify our heuristic to solve APSC with multiple side constraints, perhaps by aggregating them.

    Figure


    Jobs in Process at Time t


    Constraint x1 + x2 + x3 ≤ 2.5 with Different RHS Values

    Table

    Test Instances for Problem P(2)

    Test Instances for Problem P(3)

    Test Results for Cases 1 through 27

    Comparison of Cases i and id for i =11, 14, 17

    Comparison of Cases i and id for i = 28, 29, 30

    Test Results for Cases 28, 29 and 30

    References

    1. Aboudi R , Jornsten K (1990) Resource constrained assignment problems , Discrete Appl Math, Vol.26; pp.175-191
    2. Aggarwal V (1985) A Lagrangean-relaxation method for the constrained assignment problem , Comput Oper Res, Vol.12(1); pp.97-106
    3. Akturk MS , Atamturk A , Gurel S (2010) Parallel machine match-up scheduling with manufacturing cost considerations , J Sched, Vol.13; pp.95-110
    4. Alagoz O , Azizoglu M (2003) Rescheduling of identical parallel machines under machine eligibility constraints , Eur J Oper Res, Vol.149; pp.523-532
    5. Alidaee B , Ahmadian A (1993) Two parallel machine sequencing problems involving controllable job processing times , Eur J Oper Res, Vol.70; pp.335-341
    6. Azizoglu M , Alagoz O (2005) Parallel-machine rescheduling with machine disruptions , IIE Trans, Vol.37; pp.1113-1118
    7. Cheng TCE , Chen ZL , Li CL (1996) Parallel-machine scheduling with controllable processing times , IIE Trans, Vol.28; pp.177-180
    8. Curry J , Peters B (2005) Rescheduling parallel machines with stepwise increasing tardiness and machine assignment stability objectives , Int J Prod Res, Vol.43(15); pp.3231-3246
    9. Gupta A , Sharma J (1981) Tree search method for optimal core management of pressurized water reactors , Comput Oper Res, Vol.8(4); pp.263-266
    10. Gurel S , Cincioglu D (2015) Rescheduling with controllable processing times for number of disrupted jobs and manufacturing cost objectives , Int J Prod Res, Vol.53(9); pp.2751-2770
    11. Gurel S , Akturk MS (2007) Considering manufacturing cost and scheduling performance on a CNC turning machine , Eur J Oper Res, Vol.177; pp.325-343
    12. Lieshout PMD , Volgenant A (2007) A branch-and-bound algorithm for the singly constrained assignment problem , Eur J Oper Res, Vol.176; pp.151-161
    13. Mazzola J , Neebe AW (1986) Resource-constrained assignment scheduling , Oper Res, Vol.34; pp.560-572
    14. Murty K (1968) An algorithm for ranking all the assignments in order of increasing cost , Oper Res, Vol.16; pp.682-687
    15. Ozlen M , Azizoglu M (2009) Generating all efficient solutions of a rescheduling problem on unrelated parallel machines , Int J Prod Res, Vol.47(19); pp.5245-5270
    16. Papadimitriou CH , Steiglitz K (1982) Combinatorial Optimization : Algorithms and Complexity, Prentice Hall
    17. Pinedo ML (2008) Scheduling : Theory, Algorithms, and Systems, Prentice Hall
    18. Punnen A , Aneja YP (1995) A tabu search algorithm for the resource-constrained assignment problem , J Oper Res Soc, Vol.46; pp.214-220
    19. Shabtay D , Steiner G (2007) A survey of scheduling with controllable processing times , Discrete Appl Math, Vol.155; pp.1643-1666
    20. Vickson RG (1980) Two single-machine sequencing problems involving controllable job processing times , AIIE Trans, Vol.12; pp.258-262
    21. Yang B (2007) Single machine rescheduling with new jobs arrivals and processing time compression , Int J Adv Manuf Technol, Vol.34; pp.378-384
    22. Yin N , Wang XY (2011) Single-machine scheduling with controllable processing times and learning effect , Int J Adv Manuf Technol, Vol.54; pp.743-748