Parallel Diagonally Implicit Runge-Kutta Methods For Solving Ordinary Differential Equations
Din, Ummul Khair Salma (2009) Parallel Diagonally Implicit Runge-Kutta Methods For Solving Ordinary Differential Equations. PhD thesis, Universiti Putra Malaysia.
This thesis focuses on the derivation of diagonally implicit Runge-Kutta (DIRK) methods that can be implemented in parallel. Several new methods are proposed whose sparsity patterns enable parallel evaluation of the stages. In the first part of the thesis, a fifth-order DIRK method suitable for execution on two processors, and fourth- and fifth-order DIRK methods suitable for three processors, are proposed. These methods are executed with fixed stepsizes on a set of nonstiff problems. Their stability regions are presented, and numerical results are compared with those of existing methods. Parallel computation shows a significant reduction in execution time when solving large systems of nonstiff ordinary differential equations (ODEs).

The subsequent part of the thesis discusses embedded DIRK methods suitable for two-processor implementation. Two 4(3) and two 5(4) embedded DIRK pairs with stability regions adequate for solving stiff ODEs are proposed. Numerical experiments on stiff test problems are carried out using a variable stepsize strategy. An existing code for solving stiff ODEs with embedded DIRK methods having equal diagonal elements is modified to accommodate the new methods, whose diagonal elements alternate. Comparisons with existing methods show competitive efficiency when solving small systems of stiff ODEs. A parallel code is developed with the same capability as the modified sequential code to handle stiff ODEs, both linear and nonlinear. All algorithms are written in C, and the parallel code is implemented on a Sun Fire V1280 distributed-memory system. Three large-scale systems of stiff ODEs are used to measure the parallel performance of the new embedded methods. The results show that the speedup increases as the dimension of the problem grows, a significant contribution towards reducing the cost of computation.