01-17-2024, 03:22 PM
Enhanced Random Search Optimization Ver 2
============================
Random search optimization is very easy to code. This simplicity comes at the cost of producing results that are only "close" to the correct answers. This HP Prime program implements an enhanced version of random search optimization. The program performs random searches within the lower and upper bounds specified for each variable. The enhancements are as follows:
1. Each time an improved optimum point is found, the program performs a secondary (a.k.a. 'local') random search in the region around that point.
2. After half of the global iterations have been performed, the program narrows the search region around the current best point.
3. This new version makes sure that the reduced bounds do not cross the original bounds. Use this new version if crossing the initial bounds is prohibited by the nature of the problem you are dealing with; otherwise, you can use either version of the program. A small sketch of this bound-clamping idea appears right after this list.
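The sketch below is my own minimal illustration of enhancement 3 and is not part of the program listing; the helper name SHRINK and its list-valued return are assumptions made for the example, while the 0.85/1.15 factors match the program's global narrowing step:
Code:
EXPORT SHRINK(lo,hi,p)
BEGIN
// hypothetical helper: shrink [lo,hi] around a best point p
// without letting the new bounds cross the original ones
IF p > 0 THEN
RETURN {MAX(lo,0.85*p),MIN(hi,1.15*p)};
END;
IF p < 0 THEN
RETURN {MAX(lo,1.15*p),MIN(hi,0.85*p)};
END;
RETURN {lo,hi}; // p == 0: leave the bounds unchanged
END;
For example, SHRINK(0,5,2) returns {1.7, 2.3}.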
The program uses function RANDOMSEARCH() to perform the optimization. This function has the following parameters:
1) The parameter lb is a row vector that specifies the lower bounds for each variable.
2) The parameter ub is a row vector that specifies the upper bounds for each variable.
3) The parameter maxIt1 is the maximum number of 'global' random search iterations.
4) The parameter maxIt2 is the maximum number of 'local' random search iterations. The function performs maxIt2 iterations to search around each improved optimum point.
The function returns the row matrix that contains the best "guesses" for the optimum values of the variables. The function FX contains the code for the optimized function. In the example below, FX(x) = sum((x(i)-i)^2).
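To optimize a different function, you simply edit the body of FX. As a hypothetical illustration (my own example, not the one used in the listing below), this variant would minimize sum((x(i)^2-i)^2) instead:
Code:
EXPORT FX(x,n)
BEGIN
LOCAL i, y;
y := 0;
FOR i FROM 1 TO n DO
// hypothetical objective; replace with your own expression
y := y + (x(1,i)^2-i)^2;
END;
RETURN y;
END;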
To find the optimum values of the four variables for function FX and store the results in matrix M9, I specify the following arguments:
1) The lower bounds row vector of [0, 0, 0, 0].
2) The upper bounds row vector of [5, 5, 5, 5].
3) A maximum of 100000 global iterations.
4) A maximum of 1000 local iterations (performed EACH time an improved approximation for the optimum is found).
M9 := RANDOMSEARCH([0,0,0,0],[5,5,5,5],100000,1000)
A sample output located in the first row of matrix M9 is:
0.995571271159, 2.00109660047, 2.98724324198, 4.0215903877
The actual exact results are:
[1, 2, 3, 4]
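For reference, plugging the sample output into FX gives a value of roughly 6.5E-4, compared with the exact minimum of 0 at [1, 2, 3, 4].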
Remember, since the optimization process heavily depends on random values, the results will vary each time you run the program.
You can fine-tune the coded constants that shrink the search regions lb, ub, lb2, and ub2. You can also remove the parameter maxIt2 and declare it (in a LOCAL clause) inside the function as a fraction of the parameter maxIt1; for example, you can use the statement maxIt2:=IP(maxIt1/10). Finally, you can edit the value of the critical iteration, m, that reduces the initial lower and upper bounds. For example, to set m to 75% of maxIt1, use m:=IP(3/4*maxIt1) or m:=IP(0.75*maxIt1).
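As a quick sketch of these two tweaks (assuming the example's maxIt1 = 100000, so the numbers in the comments are just that assumption worked out):
Code:
maxIt2 := IP(maxIt1/10); // 10000 local iterations per improved point
m := IP(0.75*maxIt1); // narrow the global bounds at iteration 75000
Here is the listing for the program.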
Code:
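// FX: the objective function to be minimized; x is a 1 x n row matrix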
EXPORT FX(x,n)
BEGIN
LOCAL i, y;
y := 0;
FOR i FROM 1 TO n DO
y := y + (x(1,i)-i)^2;
END;
RETURN y;
END;
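// RANDOMSEARCH: enhanced random search for the minimum of FX within
// the bounds [lb, ub]; returns a 1 x n row matrix with the best point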
EXPORT RANDOMSEARCH(lb,ub,maxIt1,maxIt2)
BEGIN
LOCAL bestX, bestFx, iter, iter2;
LOCAL i, j, m;
LOCAL lb2, ub2, x, f;
LOCAL lstDim, n;
lstDim := SIZE(lb);
n := lstDim(1);
lb2 := MAKEMAT(0,1,n);
ub2 := MAKEMAT(0,1,n);
FOR i FROM 1 TO n DO
lb2(1,i) := lb(i);
ub2(1,i) := ub(i);
END;
x := MAKEMAT(0,1,n);
bestX := MAKEMAT(0,1,n);
bestFx := 10^99;
// you can change the next statement to set m
// to a different fraction of maxIt1
m := IP(maxIt1/2);
FOR iter FROM 1 TO maxIt1 DO
IF iter == m THEN
FOR i FROM 1 TO n DO
IF bestX(1,i) > 0 THEN
IF 0.85*bestX(1,i) >= lb(i) THEN
lb(i) := 0.85*bestX(1,i);
END;
IF 1.15*bestX(1,i) <= ub(i) THEN
ub(i) := 1.15*bestX(1,i);
END;
ELSE
IF bestX(1,i) < 0 THEN
IF 0.85*bestX(1,i) <= ub(i) THEN
ub(i) := 0.85*bestX(1,i);
END;
IF 1.15*bestX(1,i) >= lb(i) THEN
lb(i) := 1.15*bestX(1,i);
END;
END;
END;
END; // FOR i
END; // IF iter == m
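// draw a random candidate point within the current global bounds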
FOR i FROM 1 TO n DO
x(1,i) := lb(i) + (ub(i)-lb(i))*RANDOM(0,1);
END;
f := FX(x,n);
IF f < bestFx THEN
bestFx := f;
FOR i FROM 1 TO n DO
bestX(1,i) := x(1,i);
END;
// GET THE REGION AROUND THE NEW BEST POINT
FOR i FROM 1 TO n DO
IF bestX(1,i) > 0 THEN
IF 0.95*bestX(1,i) >= lb2(1,i) THEN
lb2(1,i) := 0.95*bestX(1,i);
END;
IF 1.05*bestX(1,i) <= ub2(1,i) THEN
ub2(1,i) := 1.05*bestX(1,i);
END;
ELSE
IF bestX(1,i) < 0 THEN
IF 0.95*bestX(1,i) <= ub2(1,i) THEN
ub2(1,i) := 0.95*bestX(1,i);
END;
IF 1.05*bestX(1,i) >= lb2(1,i) THEN
lb2(1,i) := 1.05*bestX(1,i);
END;
END;
END;
END;
// SEARCH NEAR THE NEW BEST POINT
FOR iter2 FROM 1 TO maxIt2 DO
FOR i FROM 1 TO n DO
x(1,i) := lb2(1,i)+(ub2(1,i)-lb2(1,i))*RANDOM();
END;
f := FX(x,n);
IF f < bestFx THEN
bestFx := f;
FOR i FROM 1 TO n DO
bestX(1,i) := x(1,i);
END;
END;
END; // FOR iter2
END; // IF f < bestFx
END; // FOR iter
RETURN bestX;
END;