Optimization using random searches
I've done lots of these, and there are a few tricks that really speed things up. I have a bunch of Fortran code somewhere (I keep all my old stuff when possible) that solved all the test problems on several web sites, things like Rosenbrock's Banana Function and the like.

A random search always converges in probability (but can take a long time). It helps to use quasi-random sampling for the main global sampling scheme. For the "probing" part (looking for promising regions), I use a Richtmyer sequence: the fractional parts of multiples of the square roots of primes. I generate a batch of function values (100 or 1000 or so), then sort them by how close each is to the optimum (maximum or minimum). This gives a list of starting values for a local improvement search.
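
Roughly, in Python (a minimal sketch assuming a box-constrained minimization problem; the function names, the Rosenbrock test function, and the bounds are placeholders of mine, not from my old code):

Code:
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for d <= 10

def richtmyer_points(n, d, lo, hi):
    """Coordinate j of point k is frac(k * sqrt(p_j)), scaled into [lo_j, hi_j]."""
    alphas = [math.sqrt(p) for p in PRIMES[:d]]
    pts = []
    for k in range(1, n + 1):
        u = [(k * a) % 1.0 for a in alphas]  # fractional parts
        pts.append([lo[j] + u[j] * (hi[j] - lo[j]) for j in range(d)])
    return pts

def rosenbrock(x):
    return sum(100.0 * (x[i+1] - x[i]**2)**2 + (1.0 - x[i])**2
               for i in range(len(x) - 1))

# Generate a batch, evaluate, and sort best-first (minimization).
pts = richtmyer_points(1000, 2, [-5.0, -5.0], [5.0, 5.0])
candidates = sorted(((rosenbrock(p), p) for p in pts), key=lambda t: t[0])
# candidates[:10] are the starting points for local improvement.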

Next, for the top 10 (or 20 or whatever; this may be adjusted as the computation proceeds) I do a small local search. I usually use a grid search, which (with dimensionality d) needs at most 2d probes to find a direction that goes downhill (or uphill) if there is one. I also sample from the 80 or 90 bottom items and try a local search there; the point of sampling the poorer results is to look for near misses. One can also do a short-step local search in a random direction (using primes not used in the global search). I spend no more than half of the total work on local search.
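
The 2d-probe grid step looks something like this (a sketch for minimization; f, x, and the step size are assumed):

Code:
def axial_probe(f, x, step):
    """Try +/- step along each axis (2d probes); return the best improvement, or None."""
    best, best_val = None, f(x)
    for j in range(len(x)):
        for s in (+step, -step):
            y = list(x)
            y[j] += s
            fy = f(y)
            if fy < best_val:
                best, best_val = y, fy
    return best  # None means no axial direction goes downhill at this step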

Then I toss in a few more global points (say up to 20 or 200 or whatever) in total. Sort again and discard the lowest. Now I have a list of good candidate points, each found either by a (lucky) global probe or by a local search around one.
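
The sort-and-discard step is just a merge and truncate (the population size here is an arbitrary placeholder of mine; candidates is a best-first list of (value, point) pairs as above):

Code:
POP = 100  # how many candidates survive each round (a placeholder value)

def cull(candidates, new_points, f, pop=POP):
    merged = candidates + [(f(p), p) for p in new_points]
    merged.sort(key=lambda t: t[0])   # best first (minimization)
    return merged[:pop]               # discard the worst of the combined list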

Repeat the whole process: new global points, then local updates of a few candidates. (I generate local update candidates by choosing with a linearly declining probability, so the 10th candidate has 10% of the top candidate's chance of being chosen. Anything that biases the choice toward good points will do.)
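
One way to get that linear bias (a sketch; with n candidates, rank r gets weight n - r + 1, so the 10th of 10 has 10% of the top one's probability):

Code:
import random

def pick_biased(candidates):
    """candidates is sorted best-first; returns one, biased toward the top."""
    n = len(candidates)
    weights = [n - r for r in range(n)]  # n, n-1, ..., 1
    return random.choices(candidates, weights=weights, k=1)[0]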

If the search space is unbounded, I use random Gaussians (expensive, but essentially the only practical way to get uniformly distributed directions in many dimensions) to choose candidate points. At first I pick an arbitrary point as the origin; later I use several origins, such as the average of the top 10 points, or the top 200 or so. For constrained regions, I usually use a big cube around the feasible region.
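
A sketch of the Gaussian sampling (the scale and the choice of origin are placeholders; an isotropic Gaussian step gives uniformly distributed directions in any dimension):

Code:
import random

def gaussian_candidate(origin, scale):
    return [o + random.gauss(0.0, scale) for o in origin]

def mean_point(points):
    d = len(points[0])
    return [sum(p[j] for p in points) / len(points) for j in range(d)]

# e.g. origin = mean_point([p for _, p in candidates[:10]])
#      x_new  = gaussian_candidate(origin, scale=1.0)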

There are other local searches (and even global ones) that can be combined with the random search. The Nelder-Mead Creeping Simplex isn't too bad.
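
If you just want to try the simplex as the local polisher, SciPy has an implementation (my substitution here, not anything from the old Fortran):

Code:
from scipy.optimize import minimize

def rosenbrock(x):
    return sum(100.0 * (x[i+1] - x[i]**2)**2 + (1.0 - x[i])**2
               for i in range(len(x) - 1))

result = minimize(rosenbrock, x0=[-1.2, 1.0], method="Nelder-Mead")
print(result.x, result.fun)  # ends up near the minimum at (1, 1)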

If the objective function has some global structure, the sorting and local searches will push the program toward the optimum. A truly random function just gets searched as though no local search were used. Constraints can be a problem; one needs either a penalty function or some procedure for moving points back toward the constraint boundary.
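
A quadratic penalty is the simplest version (a sketch for constraints g_i(x) <= 0; the weight mu is a placeholder and can be increased as the run proceeds):

Code:
def penalized(f, constraints, mu):
    """Wrap f so that violating any g_i(x) <= 0 costs mu * violation^2."""
    def fp(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + mu * penalty
    return fp

# Example: keep x inside the unit disk, g(x) = x0^2 + x1^2 - 1 <= 0.
# f_pen = penalized(rosenbrock, [lambda x: x[0]**2 + x[1]**2 - 1.0], mu=1e3)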

The main things I added that are not in the literature are the sorting and discarding (an evolutionary procedure) and the idea of biased selection from all candidates to try to get a lucky hit.