Table 2 Parameter settings for the single-objective algorithms
| Algorithm | Parameter description | Parameter settings and rationale |
|---|---|---|
| GA | Population, N | We set \(N=100\) to maintain adequate genetic diversity while balancing the evaluation budget; the same value is used for PSO and DE. This common setting supports fair performance comparison, as recommended in [44] |
| | Selection and survival | Based on [45, 46], we employed tournament selection with elitism: tournament size \(k=2\), retaining the best two solutions each generation. This applies mild selection pressure while preserving the top two solutions |
| | Crossover and mutation | Simulated binary crossover (SBX) and polynomial mutation are used [47, 48], with crossover probability \({p}_{c}=1.0\) and mutation probability \({p}_{m}=1/n\), where \(n\) is the number of decision variables. The distribution indices for crossover and mutation are \({\eta }_{c}=20\) and \({\eta }_{m}=20\), respectively |
| | Stopping criterion | The stopping criterion is the number of function evaluations (nEvals), set to \(nEvals=1500\). This provided pragmatic convergence, as evidenced by the convergence plots from the trial runs |
| PSO | Population size, N | \(N=100\) |
| | Inertia weight \(w\) and learning factors \({c}_{1}\), \({c}_{2}\) | As recommended in [49, 50], a linearly decreasing inertia weight was adopted, \(w: 0.9\to 0.4\), with \({c}_{1}={c}_{2}=2\) |
| | Stopping criterion | \(nEvals=1500\) |
| DE | Population size, N | \(N=100\) |
| | Sampling | Latin hypercube sampling (LHS) was adopted for its ability to maximize coverage of the parameter space and enhance the efficiency of the optimization process [51] |
| | Mutation strategy | We adopt the DE/rand/1/bin strategy, which is the default in pymoode and is recommended in [52] |
| | Crossover rate \(CR\) and scale factor \(F\) | Based on [52], we selected a moderate \(CR=0.5\) and a randomized \(F\in(0.0, 0.9)\) to improve robustness |
| | Dither and jitter | Dither is set to "vector", since selecting \(F\) randomly for each difference vector can improve convergence; jitter is set to False so that no perturbation is added to the difference vectors [53, 54] |
| | Stopping criterion | \(nEvals=1500\) |
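The DE settings above (DE/rand/1/bin, \(CR=0.5\), \(F\) dithered per difference vector in \((0.0, 0.9)\)) can be sketched in a few lines. This is a minimal NumPy illustration of the operator, not the pymoode implementation; the function name and population values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def de_rand_1_bin_trials(pop, CR=0.5, F_bounds=(0.0, 0.9), rng=rng):
    """Build one generation of DE/rand/1/bin trial vectors.

    With dither="vector", a fresh F is drawn for each difference
    vector; with jitter disabled, no extra per-gene perturbation
    is applied to the difference vector.
    """
    N, n = pop.shape
    trials = np.empty_like(pop)
    for i in range(N):
        # DE/rand/1: three distinct random individuals, none equal to i
        idx = rng.choice([j for j in range(N) if j != i], size=3, replace=False)
        r1, r2, r3 = pop[idx]
        F = rng.uniform(*F_bounds)        # dither: random F per difference vector
        mutant = r1 + F * (r2 - r3)       # rand/1 mutation
        # binomial (bin) crossover; force at least one gene from the mutant
        mask = rng.random(n) < CR
        mask[rng.integers(n)] = True
        trials[i] = np.where(mask, mutant, pop[i])
    return trials

pop = rng.random((10, 5))                 # toy population, 10 individuals, 5 variables
trials = de_rand_1_bin_trials(pop)
print(trials.shape)  # (10, 5)
```

In the full algorithm, each trial vector would then replace its parent only if it achieves an equal or better objective value (greedy selection).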