Towards large scale unconstrained optimization

Abu Hassan, Malik (2007) Towards large scale unconstrained optimization.
Abstract

A large scale unconstrained optimization problem can be formulated when the dimension n is large. The notion of 'large scale' is machine dependent, and hence it can be difficult to state a priori when a problem is of large size. However, today an unconstrained problem with 400 or more variables is usually considered a large scale problem. The main difficulty in dealing with large scale problems is that effective algorithms for small scale problems do not necessarily translate into efficient algorithms when applied to large scale problems. Therefore, in dealing with unconstrained problems with a large number of variables, modifications must be made to the standard implementations of the many existing algorithms for the small scale case. One of the most effective Newton-type methods for solving large scale problems is the truncated Newton method. This method computes a Newton-type direction by truncating the conjugate gradient iterates (inner iterations) whenever a required accuracy is obtained, thereby guaranteeing superlinear convergence. Another effective approach to large scale unconstrained optimization is the limited memory BFGS method. This method is well suited to large scale problems because the storage of matrices is avoided: only a small number of vector pairs is stored. The symmetric rank one (SR1) update is one of the simplest quasi-Newton updates for solving large scale problems. However, a basic disadvantage is that the SR1 update may not preserve positive definiteness, even when updating a positive definite approximation. A simple restart procedure for the SR1 method, using the standard line search to avoid the loss of positive definiteness, will be implemented. The matrix-storage free BFGS (MF-BFGS) method combines a restarting strategy with the BFGS method. We also attempt to construct a new matrix-storage free method which uses the SR1 update (MF-SR1).
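The SR1 update and its positive-definiteness safeguard can be sketched as follows. This is a minimal illustration of the standard SR1 formula with the usual denominator skip test, not the thesis's specific restart procedure; the tolerance `r` and function name are assumptions for the example.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-one (SR1) update of a Hessian approximation B.

    s = x_{k+1} - x_k (step), y = g_{k+1} - g_k (gradient change).
    B_{k+1} = B_k + (y - B s)(y - B s)^T / ((y - B s)^T s).
    Because the denominator can vanish or change sign, SR1 need not
    preserve positive definiteness; the standard safeguard skips the
    update when |(y - B s)^T s| is too small relative to the norms.
    """
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # update skipped to avoid numerical breakdown
    return B + np.outer(v, v) / denom
```

On a quadratic with Hessian A, feeding exact pairs (s, y = A s) along independent directions recovers A after n updates, which is the hereditary property that makes SR1 attractive despite the indefiniteness risk.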
The MF-SR1 method is superior to the MF-BFGS method on some problems. However, for other problems the MF-BFGS method is more competitive because of its rapid convergence. The matrix-storage free methods can be greatly accelerated by means of a simple scaling. Therefore, by applying a simple scaling to the SR1 and BFGS methods, we can improve the methods tremendously.
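The idea of accelerating a limited memory method by a simple scaling can be sketched with the L-BFGS two-loop recursion, where the initial matrix is taken as H_0 = gamma * I with the Shanno-Phua / Oren-Luenberger scaling gamma = s^T y / y^T y. This is one common choice of "simple scaling"; the exact scaling used in the thesis may differ.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """L-BFGS two-loop recursion producing d = -H g.

    s_list, y_list hold the stored vector pairs (oldest first).
    The initial inverse Hessian is scaled: H_0 = gamma * I with
    gamma = s^T y / y^T y for the most recent pair (an assumed,
    standard scaling, not necessarily the thesis's).
    """
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # backward pass: newest pair first
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # scaled initial matrix H_0 = gamma * I
    s, y = s_list[-1], y_list[-1]
    gamma = (s @ y) / (y @ y)
    r = gamma * q
    # forward pass: oldest pair first
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos),
                              reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```

With a single stored pair the recursion reproduces the secant condition H y = s, so the direction for g = y is exactly -s; the scaling only rescales the component of g outside the span of the stored pairs, which is what makes it cheap yet effective.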
