1. Introduction
In this paper, we consider a scheduling problem in which two agents compete to use a common single machine, and each agent has different performance objectives when processing a set of jobs. The processing order of one job belonging to one agent can affect the objective function of the other agent. Therefore, we need to find a schedule that optimizes two objective functions at the same time by considering the impact of scheduled jobs on the objective functions of two agents.
This scenario is called a two-agent scheduling problem, and requires special attention. If we handle this problem as a simple bi-criteria, single-agent optimization problem, we could use one of two general approaches: (i) represent the objective function as the weighted sum of two objective functions, or (ii) consider the two objective functions one-by-one, using the first result as a new constraint for the second. However, neither of these methods can guarantee the optimality of the obtained solution, as the two objective functions might have different measurement units, and all jobs contribute to all criteria [2]. Therefore, two-agent scheduling problems are different from those commonly referred to as bi-criteria scheduling problems. The complexity of two-agent scheduling problems is higher than that of bi-criteria scheduling problems, and we need to devise more efficient and systematic approaches.
Two-agent scheduling problems occur in various industrial environments where two agents interact to perform their respective tasks using a common processing resource. For example, let us consider a project scheduling problem in a firm. As explained in [15], over time, multiple projects within a factory might compete to use shared common renewable resources, such as people and machinery. Each project manager is responsible for achieving good performance on his or her project, and so the use of these resources must be negotiated with other project managers. Another example can be found when tasks in a mechanical workshop require rescheduling [26]. If the main objective is to minimize the total flow time, some jobs that have not been done on one day become urgent the next. Two groups of jobs can then be assigned to two agents with different objective functions.
In recent decades, there have been a number of studies on this issue. First, the concept of two-agent scheduling problems was introduced by [4], which analyzed the computational complexity of combining two criteria into a single objective function. Then, Agnetis et al. [1] addressed the two-agent scheduling problem by considering the makespan, lateness, sum of completion times, and sum of weighted completion times, and analyzed the problem complexity while suggesting some solution algorithms. Ng et al. [25] considered the case whereby one agent minimizes the sum of completion times, while restricting the number of tardy jobs within a bound for the other agent, and showed that this problem is NP-hard under the condition of high multiplicity. Cheng et al. [9] extended the work of Agnetis et al. [1] to multi-agent scheduling problems. Readers who are interested in this topic are referred to [11, 18, 32], and the references therein.
The above studies did not consider cases in which the job processing time depends on the position (or start time) of a job within the overall sequence. We call this the position-dependent processing time. In the literature, this processing time is categorized as having a learning effect or an aging effect. With a learning effect, the actual processing time of a job decreases according to its position in the sequence. For example, repeated processing of similar tasks improves workers' skills, increasing the speed of machine operation. With an aging effect, the actual processing time of a job increases with its position in the sequence. As a practical example, we can consider the continuous casting and rolling processes in a steel plant, where the hot and cold charge processes can be regarded as two agents; the processing time of the charges will increase with time due to the drop in temperature [20].
Recently, scheduling problems with position-dependent processing times have received considerable attention in the literature. Specifically, Biskup [5] introduced the concept of the learning effect into scheduling problems for a single agent. Mosheiov [24] mentioned the aging effect, and showed that a V-shaped schedule is optimal for the single-machine flowtime minimization problem [23]. Biskup [6] provided an extensive survey on scheduling problems with learning effects, and Kuo and Yang [16] considered a single-machine scheduling problem with a cyclic aging effect. Chang et al. [7] studied the learning/aging effect scheduling problem on a single machine with a common due date. For more recent results, see [30, 27, 33].
To the best of our knowledge, few studies on scheduling problems have simultaneously considered two-agent and position (or start time)-dependent processing times. These studies can be categorized by the functional representation method they use to compute the actual processing times: (i) linear function-based approaches (e.g., [20, 21, 17, 31]), and (ii) exponential function-based approaches (e.g., [10, 19, 29]). The above studies considered different combinations of objective functions, such as the (weighted) sum of completion times, (weighted) sum of tardiness, or lateness for one agent, under a constraint on the number of tardy jobs or the upper bound of the makespan for the other agent.
However, they did not consider cases in which each processed job has a different learning or aging ratio. This means that the actual processing time for a job can be determined not only by the processing sequence, but also by the learning/aging ratio, which can reflect the degree of processing difficulty in subsequent jobs. This modeling concept for the processing time was first suggested by Cheng and Wang [8]. Later, Bachman and Janiak [3] analyzed a simpler learning effect formulation for a single-machine case, and Wang and Xia [28] applied the concept to multiple-machine flowshop scheduling with an increasing series of dominating machines. These are plausible scenarios in real-life scheduling environments, and hence they deserve to be applied to the case of two-agent single-machine scheduling.
Thus, in this paper, we consider a two-agent single-machine scheduling problem with linear job-dependent position-based learning effects, where each job has a different learning ratio. Specifically, we take into account two objective functions such that one agent wants to minimize the total weighted completion time and the other agent wishes to restrict the makespan to within some upper bound. We formally define the problem by developing a mathematical formulation, and devise a branch-and-bound (B&B) algorithm to give optimal solutions. As this problem is at least binary NP-hard, we suggest efficient genetic algorithms (GAs) using different methods to generate the initial population and two crossover operations to find near-optimal solutions, and verify their performance by means of a numerical experiment.
The rest of this paper is organized as follows. In Section 2, we formally define the two-agent single-machine scheduling problem with linear job-dependent position-based learning effects by developing an appropriate mixed integer programming (MIP) model, and design a B&B algorithm to find optimal solutions. In Section 3, we develop efficient GAs that give near-optimal solutions, and we verify the performance of these algorithms using a numerical experiment in Section 4. Finally, we give our conclusions and suggestions for future work in Section 5.
2. Problem Definition and a Branch-and-Bound Algorithm
2.1. Problem Definition
The scheduling problem considered can be described as follows: Consider two agents A and B, where agent X ∈ {A, B} has a set of nX non-preemptive jobs JX = {J1X, J2X, ⋯, JnXX}, and JiX represents the ith job of agent X. Any of these jobs can be processed from the beginning of the operation, and each of them is processed by a single common machine. Therefore, the two agents compete to use the machine while optimizing their own objective functions, which depend on the completion times of their jobs. Specifically, we assume that agent A aims to minimize the sum of the weighted completion times of all jobs in JA, represented by ∑_{i=1}^{nA} wiA CiA, and that agent B must ensure that the makespan CmaxB does not exceed a given upper bound U, where wiA and CiA are the weight and the completion time of job JiA, respectively, for i = 1, 2, ⋯, nA.
Based on this problem description, we consider a special model in which processing times are position-dependent linear functions with job-dependent learning effects. Thus, for agent X ∈ {A, B}, the actual processing time of job JiX processed in the rth position of a sequence, represented by piX(r), has the learning effect piX(r) = piX − r·biX, where piX is the normal processing time of job JiX and biX > 0 is the constant learning ratio of job JiX, for i ∈ IX = {1, 2, ⋯, nX} and r ∈ I = {1, 2, ⋯, n} (n = nA + nB). Different schedules may therefore give different values of ∑ wiA CiA and CmaxB for the two agents. Specifically, for a schedule S, we represent these as ∑ wiA CiA(S) and CmaxB(S), where CmaxB(S) can be computed as max{CkB(S) : k ∈ IB}, using CkB(S) to represent the completion time of JkB, k ∈ IB. Furthermore, because all actual processing times must be positive, we assume that piX − n·biX > 0 for all i ∈ IX and X ∈ {A, B}.
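As an illustration, the learning-effect model above can be sketched in a few lines of Python. This is our own minimal sketch; the function and variable names are not from the paper.

```python
def actual_times_and_makespan(sequence, p, b):
    """Compute position-dependent processing times p_i(r) = p_i - r*b_i
    and the resulting completion times for a given job sequence.
    `sequence` lists job ids; `p` and `b` map job id -> normal processing
    time / learning ratio.  (Illustrative sketch, not the paper's code.)"""
    t = 0.0
    completion = {}
    for r, j in enumerate(sequence, start=1):   # positions are 1-based
        actual = p[j] - r * b[j]                # linear job-dependent learning effect
        assert actual > 0, "model assumes all actual processing times stay positive"
        t += actual
        completion[j] = t
    return completion, t                        # makespan is the last completion time
```

For example, two jobs with normal times 10 and 8 and learning ratios 1 and 0.5 complete at times 9 and 16 when processed in that order.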
Using the three-field notation Ψ1|Ψ2|Ψ3 suggested by [12], we can denote the scheduling problem as

1 | piX(r) = piX − r·biX, CmaxB ≤ U | ∑ wiA CiA, (1)
where the first component Ψ1 is the number of machines, Ψ2 describes the job characteristics, and Ψ3 contains the objective functions. Therefore, Eq. 1 represents the two-agent single-machine scheduling problem with linear job-dependent position-based learning effects.
Furthermore, by defining the binary assignment variable xjr and indexing all n jobs as J1, ⋯, Jn (with the jobs of agent A first, so that j = 1, ⋯, nA are A-jobs and j = nA + 1, ⋯, n are B-jobs), we can develop the following MIP model for the problem under consideration:

min ∑_{j=1}^{nA} wj Cj (2)

s.t.

pj(r) = pj − r·bj, j = 1, ⋯, n, r = 1, ⋯, n (3)
C|r| = ∑_{v=1}^{r} ∑_{j=1}^{n} (pj − v·bj)·xjv, r = 1, ⋯, n (4)
Cj ≥ C|r| − M·(1 − xjr), j = 1, ⋯, n, r = 1, ⋯, n (5)
Cj ≤ U, j = nA + 1, ⋯, n (6)
∑_{r=1}^{n} xjr = 1, j = 1, ⋯, n (7)
∑_{j=1}^{n} xjr = 1, r = 1, ⋯, n (8)
Cj ≥ 0, j = 1, ⋯, n (9)
C|r| ≥ 0, r = 1, ⋯, n (10)
xjr ∈ {0, 1}, j = 1, ⋯, n, r = 1, ⋯, n (11)

where M is a sufficiently large constant,
and where xjr = 1 if job Jj is assigned to the rth position, and xjr = 0 otherwise. Cj is the completion time of job Jj, pj is the normal processing time of job Jj, and C|r| is the completion time of the job in position r. Eq. 2 therefore represents the total weighted completion time of agent A, which is the objective function of this agent, and Eq. 3 denotes the position-dependent processing time of job Jj under consideration of learning effects. Eq. 4 computes the completion time of the rth job, from which the completion time of job Jj can be obtained via Eq. 5. Eq. 6 is the upper-bound makespan condition for agent B. Eqs. 7 and 8 are job assignment conditions: each job must be assigned to exactly one position, and each position can hold only one job. Eqs. 9 and 10 are non-negativity conditions for the variables Cj and C|r|, and Eq. 11 is the binary condition for the variable xjr.
Agnetis et al. [1] showed that a two-agent single-machine scheduling problem is binary NP-hard, even without job learning or aging effects. Therefore, our problem is at least binary NP-hard, necessitating the development of an efficient solution procedure to find a (near-) optimal solution.
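To make the problem semantics concrete, the optimum can be obtained for tiny instances by plain enumeration. The following is our own brute-force reference sketch (practical only for very small n, and not one of the paper's algorithms); job encodings are our assumption.

```python
from itertools import permutations

def solve_by_enumeration(jobs_A, jobs_B, U):
    """Minimize agent A's total weighted completion time subject to agent B's
    makespan not exceeding U, under p_i(r) = p_i - r*b_i.
    jobs_A: list of (p, b, w); jobs_B: list of (p, b)."""
    tagged = [("A", p, b, w) for (p, b, w) in jobs_A] + \
             [("B", p, b, 0.0) for (p, b) in jobs_B]
    best_obj, best_seq = float("inf"), None
    for seq in permutations(tagged):
        t, obj, cmax_B = 0.0, 0.0, 0.0
        for r, (agent, p, b, w) in enumerate(seq, start=1):
            t += p - r * b                   # learning-effect processing time
            if agent == "A":
                obj += w * t                 # agent A: weighted completion time
            else:
                cmax_B = t                   # agent B: track makespan
        if cmax_B <= U and obj < best_obj:   # feasibility, then improvement
            best_obj, best_seq = obj, seq
    return best_obj, best_seq
```

Tightening U can force agent A's jobs later in the sequence, worsening agent A's objective, which is exactly the trade-off the two-agent model captures.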
2.2. Properties for Dominance and Feasibility
First, we develop four dominance properties based on a pairwise interchange comparison. Suppose that there are two sequences S1 = (π1, JiX, JjX, π2) and S2 = (π1, JjX, JiX, π2), where π1 and π2 represent the scheduled and unscheduled parts, respectively. In S1, the two jobs JiX and JjX are in the rth and (r + 1)th positions, respectively, whereas S2 is obtained by interchanging JiX and JjX in S1. Then, by defining t as the completion time of sequence π1, we have the following properties.
Property 1
If two jobs JiA, JjA ∈ JA satisfy (i) biA < bjA and (ii) wjA/wiA ≤ (pjA(r) − biA)/(piA(r) − bjA), then S1 dominates S2.
Proof: From the definition, we can compute the completion times of JiA and JjA for schedules S1 and S2 as follows:

CiA(S1) = t + piA − r·biA,
CjA(S1) = t + (piA − r·biA) + (pjA − (r + 1)·bjA),
CjA(S2) = t + pjA − r·bjA,
CiA(S2) = t + (pjA − r·bjA) + (piA − (r + 1)·biA).

From biA < bjA, we have CjA(S1) < CiA(S2). Since wjA/wiA ≤ (pjA(r) − biA)/(piA(r) − bjA), we have wiA·CiA(S1) + wjA·CjA(S1) ≤ wjA·CjA(S2) + wiA·CiA(S2). Therefore, S1 yields a total weighted completion time for agent A no greater than that of S2, and a makespan for agent B no later than that of S2. This means that S1 dominates S2. ■
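The pairwise interchange argument can be checked numerically. The sketch below is our own illustration: it evaluates both orderings of a job pair after a prefix finishing at time t, using the completion-time expressions of the proof.

```python
def interchange_objectives(t, r, job_i, job_j):
    """Weighted-completion sums for S1 = (.., Ji, Jj, ..) versus the swapped
    S2 = (.., Jj, Ji, ..), with the pair in positions r and r+1 after a
    scheduled prefix finishing at time t.  Jobs are (p, b, w) triples.
    (Our own numerical illustration of the interchange argument.)"""
    (pi, bi, wi), (pj, bj, wj) = job_i, job_j
    Ci_S1 = t + pi - r * bi                       # Ji first in S1
    Cj_S1 = Ci_S1 + pj - (r + 1) * bj
    Cj_S2 = t + pj - r * bj                       # Jj first in S2
    Ci_S2 = Cj_S2 + pi - (r + 1) * bi
    return wi * Ci_S1 + wj * Cj_S1, wj * Cj_S2 + wi * Ci_S2
```

For instance, with t = 0, r = 1, Ji = (p=10, b=1, w=2) and Jj = (p=10, b=2, w=1), the conditions of Property 1 hold and S1's weighted sum (33) is smaller than S2's (40).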
Three further properties can be proved in a similar manner to Property 1.
Property 2
Given two jobs JiB, JjB ∈ JB, if (i) biB < bjB and (ii) t + (piB − r·biB) + (pjB − (r + 1)·bjB) ≤ U, then S1 dominates S2.
Property 3
Given two jobs JiA ∈ JA and JjB ∈ JB, if (i) biA < bjB, (ii) pjB(r) − biA ≥ 0, and (iii) t + (piA − r·biA) + (pjB − (r + 1)·bjB) ≤ U, then S1 dominates S2.
Property 4
Given two jobs JiB ∈ JB and JjA ∈ JA, if (i) biB < bjA, (ii) piB(r) − bjA ≥ 0, and (iii) t + (piB − r·biB) ≤ U, then S1 dominates S2.
Additionally, we can state the following four properties regarding the feasibility of a sequence. Suppose that we have a sequence of jobs (π, πc), where the scheduled part π consists of m jobs and the unscheduled part πc consists of (n − m) jobs. Furthermore, let pmin(m + r) be the minimum actual processing time over the jobs that can be scheduled in the rth position of πc, which is position (m + r) in the whole sequence.
Property 5
Given a job JiB ∈ JB ∩ π scheduled in the mth position with C[m] > U, the sequence (π, πc) is infeasible.
Proof: Since job JiB ∈ JB ∩ π in the mth position has C[m] > U, it violates the upper-bound condition for agent B. ■
Property 6
Given a job JiA ∈ JA ∩ π scheduled in the mth position with C[m] > U and a job JjB ∈ JB ∩ πc, the sequence (π, πc) cannot generate any feasible sequence.
Proof: Since job JiA ∈ JA ∩ π has C[m] > U, the completion time of any job JjB ∈ JB ∩ πc must be greater than U. Therefore, no sequence generated from (π, πc) is feasible. ■
Property 7
Given a job JiB ∈ JB ∩ πc, C[m] ≤ U, and C[m] + pmin(m + 1) > U, then (π, πc) cannot generate any feasible sequence.
Proof: By the definition of pmin(m + 1), the completion time of any job JiB ∈ JB ∩ πc satisfies CiB ≥ C[m] + pmin(m + 1), since its schedule must include the actual processing time of the job placed in position (m + 1). Therefore, we have CiB > U for any job JiB ∈ JB ∩ πc, making any schedule generated from (π, πc) infeasible. ■
Property 8
If all jobs in πc belong to agent B, then sequence (π, πc) should be (π, π̄c), where π̄c is the sequence obtained by scheduling the jobs in πc in non-decreasing order of bjB.
Proof: Since all jobs in πc belong to agent B, we have a makespan-minimization scheduling problem with the remaining (n − m) jobs for agent B, represented by

1 | pjB(r) = pjB − r·bjB | CmaxB. (12)
According to Bachman and Janiak [3], this problem can be optimized by scheduling jobs in non-decreasing order of bjB . ■
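This ordering rule is easy to sanity-check by enumeration: under p_i(r) = p_i − r·b_i, the total elapsed time is ∑p_i − ∑r_i·b_i, which is minimized by giving the largest learning ratios the latest positions. The sketch below is our own check, not code from the paper.

```python
from itertools import permutations

def best_makespan_order(jobs):
    """Compare the brute-force minimum makespan with the non-decreasing-b
    ordering rule, for jobs given as (p, b) pairs.  (Sanity-check sketch.)"""
    def makespan(seq):
        # makespan = sum of position-dependent actual processing times
        return sum(p - r * b for r, (p, b) in enumerate(seq, start=1))
    brute = min(permutations(jobs), key=makespan)          # exhaustive optimum
    sorted_rule = sorted(jobs, key=lambda job: job[1])     # non-decreasing b
    return makespan(brute), makespan(sorted_rule)
```

Both values coincide on small instances, e.g. three jobs with equal normal times and learning ratios 1, 2, 3 give makespan 16 either way.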
2.3. Branch-and-Bound Algorithm
Using the properties stated above, we now develop an efficient B&B algorithm. The algorithm attempts to assign jobs in a forward manner, i.e., iteratively from the first position. The basic idea is to branch a node into several nodes, each corresponding to the scheduling of one job among all possible jobs, and to bound each of them by computing the potential minimum value (i.e., lower bound) of the total weighted completion time of agent A for the schedule under consideration [14]. For each node, we perform the following process.
At each iteration of the algorithm, suppose that we have a sequence of jobs S = (π, πc), where the scheduled part π comprises m jobs and the unscheduled part πc comprises n1 jobs for agent A and n2 jobs for agent B. Then, we have

C[m + 1] = C[m] + p[m + 1] ≥ C[m] + pmin(m + 1), (13)

where p[m + 1] is the actual processing time of the (m + 1)th job. Therefore, we can use Eq. 13 as an estimator of C[m + 1], represented by Ĉ[m + 1] = C[m] + pmin(m + 1).
We can also sort the weights of the n1 unscheduled jobs of agent A into non-increasing order, as wA(1) ≥ wA(2) ≥ ⋯ ≥ wA(n1). Then, the lower bound of ∑ wiA CiA can be computed as follows.
Lemma 1
For schedule S with (π, πc), the lower bound of ∑ wiA CiA at the current iteration is

LB = ∑_{r=1}^{nA − n1} wA[r]·CA[r] + ∑_{r=1}^{n1} wA(r)·CA(r), (14)

where wA[r] and CA[r] represent the weight and completion time of the job scheduled in the rth position among the (nA − n1) jobs of agent A in π, respectively. Furthermore, CA(r) denotes the minimum estimated completion time of a job scheduled in the rth position among the n1 jobs of agent A in πc.
Proof: The first term in Eq. 14 computes the total weighted completion time of the (nA − n1) scheduled jobs of agent A in π. According to [13], ∑ ai·bi is minimized if the two sequences of numbers ai and bi are monotonic in the opposite sense. Based on this, the minimum total weighted completion time that can be achieved by the n1 unscheduled jobs of agent A in πc is ∑_{r=1}^{n1} wA(r)·CA(r), which gives the second term in Eq. 14, resulting in a lower bound LB for S. ■
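The rearrangement-inequality pairing in Lemma 1 can be sketched directly: sort the unscheduled weights in non-increasing order, the estimated completion times in non-decreasing order, and pair them. This is our own sketch; `est_completions` stands in for the CA(r) estimates, which we take as given.

```python
def lower_bound(scheduled_wc, weights_unscheduled, est_completions):
    """Lower bound in the spirit of Eq. 14: exact weighted completion time of
    the scheduled A-jobs plus the oppositely-ordered pairing of unscheduled
    weights and estimated minimum completion times (equal-length lists)."""
    w = sorted(weights_unscheduled, reverse=True)   # w(1) >= w(2) >= ...
    c = sorted(est_completions)                     # smallest completions first
    return scheduled_wc + sum(wi * ci for wi, ci in zip(w, c))
```

Pairing the largest weight with the smallest completion time gives the smallest possible sum, so no completion of the branch can beat this value.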
Specifically, we can compute CA(r) by assigning estimated completion times to the remaining (n1 + n2) jobs of agents A and B as follows:
- Step 1: Set tot = 0, ca = n1, and cb = n2.
- Step 2: If tot ≥ n1 + n2, or ca = 0, or cb = 0, go to Step 8.
- Step 3: If … ≤ U and cb > 0, set CB(cb) = …, set cb = cb − 1, update U as U − …, and go to Step 7. Otherwise, go to Step 4.
- Step 4: If ca > 0, set CA(ca) = …, set ca = ca − 1, and go to Step 5. Otherwise, go to Step 6.
- Step 5: If cb < n2, update U as U − (n − tot). Go to Step 7.
- Step 6: If cb > 0, set CB(cb) = … and cb = cb − 1.
- Step 7: Set tot = tot + 1 and go to Step 2.
- Step 8: We have CA(r) for r = 1, 2, ⋯, n1.
Since the lower bound gives the minimum total weighted completion time for agent A that can be achieved after completing the allocation procedure, it can be used to speed up the search procedure in the B&B algorithm by fathoming non-promising nodes. We apply a depth-first search in the B&B algorithm, the overall procedure of which can be described as follows.
- Initialization: Set Z* = ∞ and the current level L = 0. Apply the bounding step and fathoming step described below to the whole problem. If it is not fathomed, consider it as the one remaining subproblem for the iteration below.
- Steps for each iteration:
  - Branching: If there are no remaining subproblems at the current level L, set L ← L − 1 and go to the optimality test step. Otherwise, choose the subproblem with the best lower bound LB among those remaining at the current level L. If more than one subproblem has the lowest value of LB, select one at random. Branch from the node to create new subproblems as follows:
    - If all remaining jobs belong to agent B, apply Property 8 to generate one subproblem corresponding to the scheduling of all remaining jobs for agent B.
    - Otherwise, create subproblems corresponding to scheduling each remaining job that is non-dominated (by Properties 1 to 4) and feasible (by Properties 5 to 7) in the (m + 1)th position, where m is the last position of the scheduled part for the selected node.
    - For each new subproblem, update the sets of scheduled and unscheduled jobs for agents A and B.
  - Bounding: For each new subproblem, compute C[m] and LB using Eq. 14.
  - Fathoming: For each new subproblem, apply the following two fathoming tests. A subproblem with a sequence of jobs (π, πc) is fathomed if
    - (i) it is infeasible according to Properties 5 to 7, or
    - (ii) LB ≥ Z*, where Z* is the objective value of the best feasible solution found so far (updated whenever πc is empty and the complete sequence has a smaller total weighted completion time).
- Optimality test: Stop and accept the current best solution as optimal if there are no remaining subproblems or if L = 0. Otherwise, perform another iteration.
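A stripped-down version of this search can be sketched as follows. This is our own simplified illustration: it prunes only with agent B's bound and the (valid but crude) partial-objective lower bound, omitting the paper's dominance Properties 1-4 and the estimator-based bound of Eq. 14.

```python
import math

def branch_and_bound(jobs_A, jobs_B, U):
    """Depth-first B&B sketch for the two-agent problem (simplified, our own).
    jobs_A: list of (p, b, w); jobs_B: list of (p, b)."""
    jobs = [(p, b, w, "A") for (p, b, w) in jobs_A] + \
           [(p, b, 0.0, "B") for (p, b) in jobs_B]
    best = {"obj": math.inf, "seq": None}

    def dfs(remaining, seq, t, obj):
        if obj >= best["obj"]:
            return                        # bound: partial cost is a valid LB
        if not remaining:
            best["obj"], best["seq"] = obj, seq   # feasible complete schedule
            return
        r = len(seq) + 1                  # next (1-based) position
        for j in sorted(remaining):
            p, b, w, agent = jobs[j]
            t2 = t + p - r * b            # learning-effect processing time
            if agent == "B" and t2 > U:
                continue                  # fathom: violates agent B's bound
            dfs(remaining - {j}, seq + [j], t2,
                obj + (w * t2 if agent == "A" else 0.0))

    dfs(frozenset(range(len(jobs))), [], 0.0, 0.0)
    return best["obj"], best["seq"]
```

On the tiny instance used earlier (one A-job (5, 1, 2), one B-job (4, 1)), the optimum is 8 when U = 10 and 12 when U = 5, matching exhaustive enumeration.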
3. An Efficient GA
It is well known that B&B algorithms require considerable computational time to find optimal solutions to large-sized or computationally difficult problems. Therefore, we develop a GA [22], a well-known meta-heuristic approach that can find near-optimal solutions efficiently. The components of the suggested GA are as follows.
∙ Design of a chromosome: First, we design a chromosome P as an n-dimensional array representing a sequence of n jobs for the two agents. Each gene represents a job number, and its position in the chromosome corresponds to the order of the jobs in a schedule. Then, we can make the following statement on the validity of the suggested structure for a chromosome.
Property 9
Feasibility condition for a chromosome: A chromosome is valid and feasible as a sequence for the scheduling problem under consideration if the jobs represented by its genes satisfy the following two conditions: (i) job numbers are not duplicated, and (ii) the makespan for agent B satisfies the upper-bound condition.
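Property 9 translates directly into a predicate. The sketch below is our own; the argument names are assumptions, not the paper's notation.

```python
def is_feasible_chromosome(chrom, jobs, agent_B_ids, U):
    """Property 9 as a predicate (illustrative sketch).
    chrom: tuple of job ids in processing order; jobs: id -> (p, b);
    agent_B_ids: set of job ids belonging to agent B."""
    # Condition (i): genes must form a valid permutation of all job numbers.
    if len(set(chrom)) != len(chrom) or set(chrom) != set(jobs):
        return False
    # Condition (ii): agent B's makespan must respect the upper bound U.
    t, cmax_B = 0.0, 0.0
    for r, j in enumerate(chrom, start=1):
        p, b = jobs[j]
        t += p - r * b
        if j in agent_B_ids:
            cmax_B = t
    return cmax_B <= U
```

A duplicated gene or a B-job finishing after U both invalidate the chromosome.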
∙ Initialization of a population: We consider three methods of generating a chromosome to initialize a population:
- IP1: Arrange the jobs for agent B in non-decreasing order of bjB, then schedule the jobs for agent A in shortest normal processing time order.
- IP2: Schedule the jobs for agent B in non-decreasing order of bjB, then arrange the jobs for agent A in weighted shortest normal processing time order.
- IP3: Create a chromosome at random. If it does not satisfy Property 9, regenerate it.
We make Q chromosomes for the initial population, where Q is an even number. During the GA procedure, we allow a modest number of duplications, and set Q = 30 × n to increase the diversity of the chromosomes.
- Fitness cost function: We use the total weighted completion time of agent A's jobs as the fitness cost.
- Parent selection: Chromosomes in the population are sorted in ascending order of fitness cost and then paired, starting from those with the smallest fitness cost. Each pair generates two offspring by applying the following reproduction process.
- Reproduction: Reproduction generates offspring that inherit the parents' characteristics. First, we use a crossover operation, exchanging some parts of two chromosomes. Specifically, we consider two kinds of crossover operation: (i) CO1, one-point crossover, and (ii) CO2, two-point crossover. In general, however, any crossover operation applied to a scheduling problem encounters a feasibility issue in the resulting chromosome, which might contain duplicated genes. Therefore, we design a special method that guarantees the feasibility of the resulting chromosome. Suppose that we have two parents P1 and P2. One-point (two-point) crossover randomly selects one point (two points) on P1, and reorders the genes between the selected first gene and the last (second) gene according to their sequence in P2, producing a new offspring C1. If the offspring is not feasible, we reapply this procedure. Similarly, we can generate another offspring C2. For each pair of chromosomes, we apply this operation probabilistically according to some pre-specified crossover rate rc (0 < rc < 1); otherwise, we generate two offspring as C1 = P1 and C2 = P2.
Moreover, we apply a mutation operation to each chromosome by swapping two randomly selected genes. This operation is useful in escaping from local optima, and is applied probabilistically according to some pre-specified rate rm (0 < rm < 1). If the resulting chromosome is not feasible under Property 9, we repeat the procedure.
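Our reading of these operators can be sketched as follows. Reordering the genes after the cut according to their order in the other parent always yields a valid permutation, so only Property 9's makespan condition needs re-checking afterwards; the function names are our own.

```python
import random

def one_point_crossover(p1, p2):
    """CO1 sketch: keep p1 up to a random cut point, then reorder p1's
    remaining genes according to their relative order in p2.  Duplicates
    cannot arise, so the result is always a permutation."""
    cut = random.randint(1, len(p1) - 1)
    head = p1[:cut]
    tail = [g for g in p2 if g not in head]   # p2's relative order, no repeats
    return head + tail

def swap_mutation(chrom):
    """Mutation sketch: swap two randomly chosen genes to escape local optima."""
    i, j = random.sample(range(len(chrom)), 2)
    child = list(chrom)
    child[i], child[j] = child[j], child[i]
    return child
```

Two-point crossover can be built the same way by reordering only the genes between the two cut points according to the other parent.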
- Reconstruction of a population: Applying the reproduction procedure to each pair of chromosomes generates Q offspring. After evaluating the fitness cost of all offspring, we have a total of 2 × Q chromosomes, from which we select the best Q to reconstruct a new population of the same size.
- Termination condition: We terminate the GA if one of the following conditions is satisfied: (i) the maximum number of iterations has been reached, or (ii) there has been no improvement in the solution for a fixed number of consecutive iterations.
- Overall procedure of the suggested GA: The overall procedure of the suggested GA consists of three parts:
- Initialization: Initialize the GA parameters, such as the crossover probability rc, the mutation probability rm, and the termination conditions. Start with an initial population of size Q = 30 × n, and evaluate the fitness cost of each member. Sort the chromosomes in ascending order of fitness cost, and identify the one with the minimum cost in the current population.
- Iteration procedure: Pair up the chromosomes in order of fitness cost, from smallest to largest, and apply the reproduction procedure. Evaluate the fitness costs of the generated offspring. Select the best Q chromosomes from among the newly generated members and the current population to form the population for the next generation. Sort this population in ascending order of fitness cost, and identify the solution with the minimum fitness cost.
- Stopping rule: The GA stops, returning the best trial solution found so far, when a fixed number of consecutive iterations yield no improvement. Otherwise, the procedure is repeated for the next generation.
4. Numerical Experiments
4.1. Experimental Design
To verify the performance of the GAs described in the previous section, we designed a numerical experiment. First, we considered different scheduling problems by changing the number of jobs for each agent. Specifically, we considered n = 10, 12, 14, 16, assuming that each agent has the same number of jobs. The value of U for agent B was generated as a convex combination of the minimum and maximum values of the makespan, represented by V1 and V2, respectively. The value of V1 can be obtained by solving the single-agent scheduling problem in Eq. 12. V2 can be computed by the following lemma.
Lemma 2
Let Cmax,kB denote the makespan of a schedule S with JkB in the last position. Then, we have

Cmax,kB = Cmax(S′) + pkB − n·bkB,

and

V2 = max{Cmax,kB : k ∈ IB},

where bi = biA if 1 ≤ i ≤ nA, and bi = b(i−nA)B if nA + 1 ≤ i ≤ nA + nB.
Proof: For any schedule S′ obtained from schedule S by removing JkB, we have

Cmax,kB = Cmax(S′) + pkB − n·bkB,

where Cmax(S′) is the makespan of the remaining (n − 1) jobs under schedule S′. Furthermore, we can compute the maximum value of Cmax(S′) by arranging the remaining jobs in non-increasing order of their learning ratios bi. ■
Now, we can obtain different values of U by defining

U = α·V1 + (1 − α)·V2, (15)
where we consider the three different values of α = 0.25, 0.5 and 0.75. Therefore, we have 12 system configurations defined by different pairs of (n, α) . For each configuration, we produced 50 problem instances by randomly generating normal processing times in the range 1-100, learning effects in the range 1-100, and job weights in the range 1-100. Accordingly, we considered 4×3×50 = 600 problem instances.
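The convex combination above can be sketched in one line. We assume the form U = α·V1 + (1 − α)·V2, which is consistent with the later observation that small α increases U (since V1 ≤ V2); the function name is ours.

```python
def upper_bound_U(V1, V2, alpha):
    """Convex combination of the minimum (V1) and maximum (V2) makespan
    values: smaller alpha yields a looser bound U for agent B."""
    assert 0.0 <= alpha <= 1.0 and V1 <= V2
    return alpha * V1 + (1.0 - alpha) * V2
```

For example, with V1 = 10 and V2 = 20, α = 0.25 gives U = 17.5, whereas α = 0.75 gives the much tighter U = 12.5.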
Moreover, we considered three methods of generating the initial population and two different reproduction operations. Hence, we have 3×2 = 6 different GAs, represented by GA (IPk,COt ) (k = 1, 2, 3, t = 1, 2). We determined the parameter values of these GAs by pretesting some randomly generated problem instances. As a result, we set rc = 0.8, rm = 0.1, and applied the stopping condition if the GA went 30 iterations without any improvement. Each instance of the scheduling problem was solved using the B&B algorithm and the six GAs.
The performance of the GAs was evaluated in terms of the percentage error, which is defined as

% error = (TWCT(GA) − TWCT(B&B)) / TWCT(B&B) × 100,
where TWCT is the total weighted completion time. Furthermore, we measured the number of generated nodes and the CPU time of the B&B algorithm, and compared the latter with the CPU time required by the GAs. To compare the six GAs, we defined the relative deviation percentage (RDP) as

RDP = (TWCT(GAi) − TWCTbest) / TWCTbest × 100, (16)

where TWCTbest denotes the smallest total weighted completion time found among the six GAs for the instance under consideration.
Our numerical experiments considered the solutions to 600 problem instances. For each system configuration and solution method, we calculated the mean, standard deviation (stdev), and maximum of the number of generated nodes (for the B&B), the CPU time (in seconds), and the % error. <Table 1> shows the experimental results. B&B could solve instances with up to 14 jobs within a reasonable CPU time. Even when n = 16, it solved instances generated using α = 0.25 efficiently. For a fixed value of n, B&B took more time to obtain the optimal solution as the value of α increased. This is because, from Eq. 15, small values of α increase the value of U, since V1 ≤ V2. Conversely, larger values of α make U smaller, giving a tight upper bound U for agent B and making it more difficult for B&B to find an optimal solution.
Overall, the GAs exhibited good performance in the sense that they gave small % errors, with a mean of less than 1% in almost every configuration. Even for the largest configuration (n, α) = (16, 0.75), the maximum % error (using IP2) was less than 5%. In terms of computation time, the 3~9 s CPU time of the GAs compares favorably with that of B&B in environments where the scheduling problem has tight time constraints.
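The two evaluation metrics can be computed as follows; this is our own sketch of the assumed forms of the % error and RDP definitions above.

```python
def percentage_error(twct_ga, twct_optimal):
    """% error of a GA solution against the B&B optimum (assumed form)."""
    return (twct_ga - twct_optimal) / twct_optimal * 100.0

def rdp(twct_ga, twct_all_gas):
    """Relative deviation percentage against the best value obtained by
    any of the compared GAs on the same instance (assumed form)."""
    best = min(twct_all_gas)
    return (twct_ga - best) / best * 100.0
```

For example, a GA returning 105 against an optimum of 100 has a 5% error, and a GA returning 110 when the best GA found 100 has an RDP of 10%.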
A comparative analysis of the performance of each GA was conducted by computing the RDPs among the GAs by Eq. 16 and comparing the mean, standard deviation, and maximum values. The results are given in <Table 2>. In terms of the initial population, the method of random generation gave better RDP values, indicating that special methods for generating the initial population did not affect the quality of the final solution. The initial population only influenced the convergence of the GA when agent B had a tight upper bound U . <Figure 1> shows the average of the mean RDPs for GAs using three initial population methods and different combinations of (n, α), where GA(IPk), k = 1, 2, 3, represents GAs using IPk as the initial population method. Note that GAs using IP3 had low average RDPs in eight of the twelve configurations. Even when this does not give the best result, the difference from the best is less than 0.5%. Comparing two GAs using IP3, GA(IP3, CO2) is superior to GA(IP3, CO1) because it gave a low average RDP in more cases. Based on these observations, we claim that the GA using a random initial population and the two-point crossover operation, represented by GA(IP3,CO2), is superior to the other GAs.
The number of generated nodes and CPU time of B&B increased exponentially with the number of jobs n, whereas the CPU time of the GAs increased linearly within the range 0.5~5.5 s. For the largest configuration of (n, α)= (16, 0.75) , B&B took an average of 45,095.698 s (12.53h) to find an optimal solution, whereas the GAs took only 2~5 s, depending on the method of generating the initial population. More precisely, GAs using IP1 and IP2 for the initial population took about 2 s, while GAs using IP3 (the random initial population) required around 5.4 s.
It would also appear that the CPU times were affected by the value of α and the method of generating the initial population. For a fixed value of n, small values of α (i.e., 0.25 and 0.50) resulted in faster convergence for the GAs using IP3. However, GAs using IP3 with α = 0.75 were slower than the others. Therefore, it seems that the exploration ability of the GA is restricted by the tightness of U, and a specific method for generating the initial population allows the GA to converge faster than when the initial population is generated randomly. However, fast convergence does not necessarily mean that the solution is of good quality.
5. Conclusions
In this paper, we considered the two-agent single-machine scheduling problem with linear job-dependent position-based learning effects, where each job has a different learning ratio. The objective was to minimize the total weighted completion time for one agent, with the restriction that the makespan of the other agent cannot exceed a given upper bound. Since this problem is at least binary NP-hard, we suggested some efficient GAs using different methods for generating the initial population and crossover operations. A B&B algorithm incorporating several dominance properties and a lower bound was developed to find the optimal solution. The computational results indicate that the B&B algorithm could solve instances with up to 14 jobs in a reasonable amount of CPU time, and it was found that GA(IP3,CO2), which uses a random initial population and the two-point crossover operation, performed well in almost every configuration. Overall, from the perspective of computation time, the 3~9 s of CPU time required by the GAs will generally be preferable to the CPU time of the B&B approach in environments where the scheduling problem has tight time constraints.
In future research, we will extend the current method in three ways. First, we will consider other performance objectives for the two agents, such as minimizing tardiness, weighted tardiness, or the number of tardy jobs. These are practical issues in industry, although they can make the problem more difficult. Second, we will use other learning effects so that the actual processing time is dependent on the sum of processing times of preceding jobs. This learning effect is nonlinear, making it more difficult to find optimal solutions to the two-agent scheduling problem. Finally, extensions to the multi-agent or multi-machine environments are another interesting topic for consideration.