This work reviews extra-gradient methods, popularly referred to as optimistic descent algorithms, and their applications. First, we introduce the concept of optimistic mirror descent (OMD) and provide the intuition and motivation behind it. Second, we evaluate the performance of optimistic methods across a host of applications; in particular, we compare the performance of OMD in convex, non-convex, and saddle-point problem settings. Lastly, we observe that OMD has certain fundamental limitations and outperforms existing methods only under specific conditions.
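As a rough illustration of the extra-gradient idea underlying these methods, the sketch below applies the classic two-step update (a look-ahead gradient step followed by a correction step taken from the original iterate) to a simple quadratic. The function names, step size, and test objective are illustrative choices, not taken from this project:

```python
import numpy as np

def extra_gradient(grad, x0, eta=0.1, steps=200):
    """Extra-gradient sketch: take a look-ahead (prediction) step,
    then update from the original point using the gradient evaluated
    at the look-ahead point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x_half = x - eta * grad(x)    # look-ahead step
        x = x - eta * grad(x_half)    # correction step, taken from x
    return x

# Hypothetical example: minimize f(x) = ||x||^2 / 2, whose gradient is x.
x_star = extra_gradient(lambda x: x, x0=[1.0, -2.0])
```

The look-ahead gradient acts as a prediction of the next gradient, which is what makes these updates effective in saddle-point and game settings where plain gradient descent can cycle.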
This project was undertaken as part of a Large Scale Optimization course during Jan-May 2019. Further details about the project and results are available here.