Abstract
This paper develops a novel method, the global descent method, for solving a general class of global optimization problems. This method moves from one local minimizer x* of the objective function f to a better one at each iteration with the help of an auxiliary function termed the global descent function. The global descent function is not only guaranteed to have a local minimizer x′ over the problem domain in R^n but also ensures that each of its local minimizers is located in some neighborhood of a better minimizer of f with f(x′) < f(x*). These features of the global descent function enable a global descent to be achieved at each iteration using only local descent methods. Computational experiments conducted on several test problems with up to 1000 variables demonstrate the applicability of the proposed method. Furthermore, numerical comparison experiments carried out with GAMS/BARON on several test problems also justify the efficiency and effectiveness of the proposed method.
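The iteration pattern the abstract describes — minimize f locally, then locally minimize an auxiliary function whose minimizers lie near strictly better minimizers of f, and repeat — can be sketched as follows. This is a minimal illustration, not the paper's method: the auxiliary function `aux` below is a hypothetical filled-function-style stand-in (the paper's actual global descent function is not given in the abstract), and plain gradient descent stands in for any local descent routine.

```python
import numpy as np

def f(x):
    # Example 1-D multimodal objective (Rastrigin-style); global minimum 0 at x = 0.
    return float(x[0]**2 - 10.0 * np.cos(2.0 * np.pi * x[0]) + 10.0)

def num_grad(func, x, h=1e-6):
    # Central-difference numerical gradient.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (func(x + e) - func(x - e)) / (2.0 * h)
    return g

def local_descent(func, x0, lr=1e-3, iters=2000):
    # Plain gradient descent: a stand-in for any local minimization method.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x -= lr * num_grad(func, x)
    return x

def global_descent(x0, rounds=15, seed=0):
    rng = np.random.default_rng(seed)
    x_star = local_descent(f, np.asarray(x0, dtype=float))  # current local minimizer of f
    for _ in range(rounds):
        def aux(x, xs=x_star):
            # Hypothetical auxiliary (NOT the paper's construction): repel from the
            # current minimizer xs, and penalize points where f has not improved.
            return np.exp(-np.sum((x - xs)**2) / 0.1) + max(f(x) - f(xs), 0.0)**2
        start = x_star + rng.normal(scale=1.0, size=x_star.shape)
        y = local_descent(aux, start)       # local descent on the auxiliary
        cand = local_descent(f, y)          # re-minimize f from where aux led
        if f(cand) < f(x_star):             # accept only a strictly better minimizer
            x_star = cand
    return x_star
```

By construction the accepted iterates are monotonically improving in f, which mirrors the abstract's claim that a global descent is achieved at each iteration using only local descent steps; the quality of the final point depends entirely on how well the auxiliary function is designed.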
Original language | English |
---|---|
Pages (from-to) | 3161-3184 |
Number of pages | 24 |
Journal | SIAM Journal on Optimization |
Volume | 20 |
Issue number | 6 |
DOIs | |
Publication status | Published - 2010 |
Externally published | Yes |
Keywords
- Global descent method
- Global optimization
- Mathematical programming
- Nonconvex optimization
- Nonlinear programming