Date of Award

Summer 2019

Document Type

Thesis

Degree Name

Master of Science

Department

Mathematics

First Advisor

Dr. J Christopher Tweddle

Second Advisor

Dr. Dianna Galante

Third Advisor

Dr. Andrius Tamulis

Abstract

An optimization problem involves minimizing or maximizing some given quantity subject to certain constraints. Many real-life problems require optimization techniques to find a suitable solution, whether the goal is to minimize or to maximize a function. Common mathematical approaches include Linear Programming Problems (LPP), Genetic Programming, Particle Swarm Optimization, Differential Evolution algorithms, and Gradient Descent. Each of these methods has drawbacks and/or is unsuitable for some scenarios. Gradient Descent can be used only when the goal is to find a minimum and the function at hand is differentiable and convex, which makes it a specialized algorithm, whereas the other methods apply to a much wider range of problems. A major application of Gradient Descent is minimizing the loss functions of machine learning and deep learning algorithms, where it is used to optimize very complex mathematical functions. However, the standard Gradient Descent algorithm has a number of drawbacks. To overcome them, several variants and improvements of the standard algorithm have been developed that minimize the function faster and with greater accuracy. In this paper, we discuss some of these Gradient Descent based optimization algorithms.
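
As a concrete illustration of the update rule that the variants discussed in the thesis build on, here is a minimal sketch of standard Gradient Descent, x_{k+1} = x_k - eta * grad f(x_k), in Python. The names gradient_descent and grad_f, the step size, and the quadratic example are illustrative assumptions and are not taken from the thesis itself.

    # Minimal sketch of standard gradient descent (assumes a differentiable,
    # convex objective whose gradient grad_f we can evaluate at any point).
    def gradient_descent(grad_f, x0, learning_rate=0.1, n_steps=100):
        """Repeat the update x <- x - learning_rate * grad_f(x), starting from x0."""
        x = x0
        for _ in range(n_steps):
            x = x - learning_rate * grad_f(x)
        return x

    # Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3);
    # the iterates converge toward the true minimizer x = 3.
    print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))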

Included in

Mathematics Commons
