Optimization using random searches
10-26-2017, 09:59 PM
Post: #1
Optimization using random searches
Optimization using random search is an algorithm that is easy to code. The approach searches for the best values of the independent variables, drawing each at random from a trusted range and keeping the best result found. The trick is to use a large number of iterations. The best solution found tends to deviate more from the true solution as the number of variables increases and as the optimized function becomes more complex to evaluate.
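
For illustration, here is a minimal sketch of the plain random search in Python (not the HP-71B program mentioned below; the function f, the bounds lo/hi, and the iteration count are placeholders of my own):

Code:
import random

def random_search(f, lo, hi, iters=100000):
    # lo and hi are lists giving the trusted range of each variable
    n = len(lo)
    best_x = [random.uniform(lo[i], hi[i]) for i in range(n)]
    best_f = f(best_x)
    for _ in range(iters):
        x = [random.uniform(lo[i], hi[i]) for i in range(n)]
        fx = f(x)
        if fx < best_f:            # minimizing; flip the test to maximize
            best_f, best_x = fx, x
    return best_x, best_f

# Example: minimize a simple quadratic with minimum at (1, 2)
f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2
print(random_search(f, [-10, -10], [10, 10]))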

There are a few approaches that I recently used to possibly enhance random searching:

1) If the independent variables can be grouped by category (such as temperature, pressure, concentration, length, and so on), then you can add more sets of searches inside the main iteration loop. Each set draws random values only for the variables in a specific category, while holding the other variables at the best values obtained so far.

2) If you notice, while running tests on your problem, that the values of the independent variables oscillate above and below their correct optimum values, then you can add another set of calculations to the main loop. This additional set looks at the averages of the best values obtained so far for each independent variable. It then generates random numbers in ranges of, say, 20% or 30% around those averages. You have the option of including the values of independent variables that produce better minima in the running totals of the best values. This step tends to favor the average best values of the independent variables. However, remember that each loop iteration still has a free hand to pick random values for the independent variables, each within its prescribed range. (A rough sketch of this averaging refinement follows below.)
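
A rough sketch of refinement 2), again in Python rather than HP-71B; the 25% band around the running averages and the variable names are my own assumptions:

Code:
import random

def random_search_with_averages(f, lo, hi, iters=50000, band=0.25):
    n = len(lo)
    best_x = [random.uniform(lo[i], hi[i]) for i in range(n)]
    best_f = f(best_x)
    sums = best_x[:]          # running totals of the best values seen so far
    count = 1
    for _ in range(iters):
        # free global probe within the prescribed ranges
        x = [random.uniform(lo[i], hi[i]) for i in range(n)]
        # extra probe in a band (here +/- 25%) around the averages of the best values
        avg = [s / count for s in sums]
        y = [a * (1 + band * (2 * random.random() - 1)) for a in avg]
        for cand in (x, y):
            fc = f(cand)
            if fc < best_f:
                best_f, best_x = fc, cand[:]
                # fold the improving point into the running totals
                sums = [s + c for s, c in zip(sums, cand)]
                count += 1
    return best_x, best_f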

I should soon be posting HP-71B programs showing the general random search as well as a version that uses the averages of the best values.

Namir
10-27-2017, 12:59 AM
Post: #2
RE: Optimization using random searches
I decided to post HP Prime code instead. See the HP Prime code listing section.
10-27-2017, 01:52 AM
Post: #3
RE: Optimization using random searches
One improvement I saw many years ago was to evaluate the objective function not only at a random point but also at a weighted average of the random point and the current best point, i.e. stepping from the current best in a random direction each time as well as evaluating at the random location itself.
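
In Python terms (the weight w and the names are my own assumptions, not a prescription), that extra evaluation inside the main loop might look like:

Code:
import random

def step(f, best_x, best_f, lo, hi, w=0.3):
    # draw a fresh random point within the trusted ranges
    x = [random.uniform(l, h) for l, h in zip(lo, hi)]
    # also probe a weighted average of the random point and the current best,
    # i.e. a step from the current best toward the random point
    blend = [b + w * (xi - b) for b, xi in zip(best_x, x)]
    for cand in (x, blend):
        fc = f(cand)
        if fc < best_f:
            best_f, best_x = fc, cand
    return best_x, best_f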

Pauli
10-29-2017, 03:26 AM
Post: #4
RE: Optimization using random searches
That sounds like it would work. I have also tried adding a search inside ranges (one for each variable) defined by the minimum and maximum best values found so far. Another approach would use the average of the three smallest and the average of the three largest best values so far to define an additional search range.
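
A hedged sketch of that idea, keeping a history of the best vectors and occasionally sampling inside the box they span (function and variable names are mine):

Code:
import random

def box_from_history(history):
    # history is a list of the best variable vectors found so far;
    # the box spans, per variable, the min and max of those values
    lo = [min(col) for col in zip(*history)]
    hi = [max(col) for col in zip(*history)]
    return lo, hi

def sample_in_box(lo, hi):
    return [random.uniform(l, h) for l, h in zip(lo, hi)]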

Namir
10-29-2017, 03:59 AM
Post: #5
RE: Optimization using random searches
I wouldn't restrict the range -- the extremum needn't lie between the endpoints regardless of how you choose them. I suspect that if the function is sufficiently well behaved for this to always work, you'd be better off with a non-random algorithm.

Another thing I remember from the interpolation approach was to also step beyond the current random point by the same amount as the weighted sample. Sometimes this helped.
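
One reading of that, continuing the sketch from post #3: besides the blended point best + w*(x - best), also probe past the random point by the same fraction. This is my interpretation, not necessarily the original recipe:

Code:
# extra probe past the random point x, stepping beyond it by the same fraction w
beyond = [xi + w * (xi - b) for b, xi in zip(best_x, x)]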


Pauli
10-29-2017, 02:14 PM
Post: #6
RE: Optimization using random searches
I've done lots of these and there are a few tricks that really speed things up. I have a bunch of Fortran code somewhere (I keep all my old stuff when possible) that solved all the test problems on several web sites, things like Rosenbrock's banana function and the like.

A random search always converges in probability (but can take a long time). It helps to use quasi-random sampling for the main global sampling scheme. For the "probing" part (looking for promising regions), I use a Richtmyer sequence: the fractional parts of multiples of the square roots of primes. I generate a bunch (100 or 1000 or so) of function values, then sort them according to which is closest to the optimum (maximum or minimum). This gives me a list of starting values for a local improvement search.
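
A small sketch of such a quasi-random generator in Python (the scaling into a box [lo, hi] and all names are my own additions, not the original Fortran):

Code:
import math

def primes(n):
    # first n primes by trial division (fine for the handful needed here)
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def richtmyer(n_points, dim, lo, hi):
    # quasi-random points: fractional parts of multiples of sqrt(prime),
    # one prime per dimension, scaled into the box [lo, hi]
    alphas = [math.sqrt(p) for p in primes(dim)]
    pts = []
    for k in range(1, n_points + 1):
        frac = [(k * a) % 1.0 for a in alphas]
        pts.append([l + fr * (h - l) for fr, l, h in zip(frac, lo, hi)])
    return pts

# probe phase: generate, evaluate, and sort best-first (minimization)
# candidates = sorted(richtmyer(1000, 2, [-5, -5], [5, 5]), key=f)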

Next, for the top 10 (or 20 or whatever; this can be adjusted as the computation proceeds) I do a small local search. I usually use a grid search which (with dimensionality d) needs at most 2d probes to find a direction that goes downhill (or uphill) if there is one. I also sample from the bottom 80 or 90 items and try a local search there. It's also possible to do a short-step local search in a random direction (using primes not used in the global search). I spend no more than half of the total work on local search. The idea of sampling the poorer results is to look for near misses.
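
A minimal coordinate-probe version of that local step in Python (the step size h and the names are assumptions):

Code:
def local_probe(f, x, fx, h):
    # probe +h and -h along each of the d coordinates (at most 2d evaluations)
    # and keep the best downhill neighbour found
    best_x, best_f = x, fx
    for i in range(len(x)):
        for s in (h, -h):
            y = x[:]
            y[i] += s
            fy = f(y)
            if fy < best_f:
                best_x, best_f = y, fy
    return best_x, best_f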

Then I toss in a few more global search points (say up to 20 or 200 or whatever in total). Sort again and discard the worst. Now I have a list of good candidate points coming either from a (lucky) global search or from local probes around such a point.

Repeat the whole process: new global points, locally update a few. (I think I generate local update candidates by choosing with a linearly declining probability, so the 10th candidate has 10% of the probability of being chosen that the top candidate has. I just use something that biases the choice towards good points.)
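
For the biased choice, something like the following captures the idea (the exact linear weighting is my guess):

Code:
import random

def pick_candidates(sorted_candidates, k):
    # sorted_candidates is best-first; weights n, n-1, ..., 1 mean the last
    # entry is chosen with 1/n the probability of the first
    n = len(sorted_candidates)
    weights = list(range(n, 0, -1))
    return random.choices(sorted_candidates, weights=weights, k=k)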

If the search space is unbounded, I use random Gaussians (expensive, but the only distribution with independent components that gives uniformly distributed directions in the multidimensional case) to choose candidate points. At first I choose an arbitrary point for the origin; later I use several origins, such as the average of the top 10 points or the top 200 or so. For constrained regions, I usually just use a big cube around the region.
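
A sketch of the unbounded case (the scale sigma and the choice of origin are assumptions):

Code:
import random

def gaussian_candidate(origin, sigma=1.0):
    # independent Gaussians per coordinate give a direction that is
    # uniformly distributed over the sphere, at a random distance
    return [o + random.gauss(0.0, sigma) for o in origin]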

There are other local searches (and even global ones) that can be combined with the random search. The Nelder-Mead creeping simplex isn't too bad.
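
If SciPy is available, one way to bolt such a local polish onto the best random-search candidates (a sketch of my own, not part of the original Fortran code):

Code:
from scipy.optimize import minimize

def polish(f, x0):
    # Nelder-Mead "creeping simplex" refinement of a candidate point
    res = minimize(f, x0, method="Nelder-Mead")
    return res.x, res.fun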

If the objective function has some global structure, the sorting and local searches will pull the program toward the optimum. A truly random function just gets searched as though no local search were used. Constraints can be a problem; either a penalty function or some procedure that moves points back toward the feasible region is needed.
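
A quadratic penalty is the simplest way to fold constraints of the form g(x) <= 0 into the objective (the penalty weight mu and the names are assumptions):

Code:
def penalized(f, constraints, mu=1e3):
    # constraints: list of functions g, with g(x) <= 0 when x is feasible
    def fp(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + mu * penalty
    return fp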

The main things I added that are not in the literature are the sorting and discarding (an evolutionary procedure) and the idea of biased selection from all candidates to try to get a lucky hit.