# Solving a Non-Convex problem by solving many Convex problems

Recall the Babylonian method for computing square roots, which you might have learnt in high school. I propose a higher-dimensional variant of that, which I use to solve a few non-convex problems, specifically computing the matrix square root, positive semidefinite (PSD) matrix completion, and Euclidean distance matrix (EDM) completion.

## Babylonian method

It is an iterative procedure for computing the square root of a given positive real number. Say we need to compute the square root of $a$. The first iterate $x_0$ can be initialized to a random positive real number. The iterations proceed as:

$$x_{k+1} = \frac{1}{2}\left(x_k + \frac{a}{x_k}\right)$$

As $k \to \infty$, $x_k \to \sqrt{a}$ at a quadratic rate.
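A minimal Python sketch of this procedure (the function name and the fixed iteration count are my own choices):

```python
def babylonian_sqrt(a, x0=1.0, iters=20):
    """Approximate sqrt(a) via the update x_{k+1} = (x_k + a / x_k) / 2."""
    x = x0
    for _ in range(iters):
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))  # approximately 1.41421356...
```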

Here is my interpretation of this procedure. Define the function $f(x, y) = xy$, so that $f(x, x) = x^2$. Let $x_0$ be a random positive real number. Under this interpretation, the iterations proceed as:

$$x_{k+1} = \frac{1}{2}\left(x_k + \operatorname*{argmin}_{y}\ \big(f(x_k, y) - a\big)^2\right)$$

To see why this is true, note that $\big(f(x_k, y) - a\big)^2 = (x_k y - a)^2$ is minimized at $y = a/x_k$. Substituting this minimizer into the update gives $x_{k+1} = \frac{1}{2}\left(x_k + \frac{a}{x_k}\right)$. So we get back the original Babylonian method.
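Here is a small Python check of this interpretation, with `scipy.optimize.minimize_scalar` standing in for a generic convex solver (the variable names are mine):

```python
from scipy.optimize import minimize_scalar

a, x_k = 7.0, 3.0

# Inner convex subproblem: minimize (f(x_k, y) - a)^2 with f(x, y) = x * y.
res = minimize_scalar(lambda y: (x_k * y - a) ** 2)

# Averaging the minimizer with the current iterate recovers the Babylonian update.
x_next = 0.5 * (x_k + res.x)
assert abs(x_next - 0.5 * (x_k + a / x_k)) < 1e-6
```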

## Matrix Square Root

Let $X^{\ast}$ be an unknown $n \times n$ PSD matrix of rank $r$. Let $M = (X^{\ast})^2$. The goal here is to find $X^{\ast}$ given $M$. Trying to minimize the following objective function would not work, as it is non-convex:

$$\min_{X \succeq 0}\ \left\| X^2 - M \right\|_F^2$$

Given $M$, we can determine its rank $r$ (since $X^{\ast}$ is PSD, $M = (X^{\ast})^2$ has the same rank as $X^{\ast}$). We initialize our estimate $X_0$ to be a random PSD matrix of rank $r$. This can be obtained by generating a random $n \times r$ matrix $B$ and letting $X_0 = BB^T$ be its Gram matrix. Since matrix multiplication is not commutative, defining the function as $f(X, Y) = XY$ would not work.

We define it as

$$f(X, Y) = \frac{XY + YX}{2},$$

so that $f(X, X) = X^2$. The iterations proceed as:

$$X_{k+1} = \frac{1}{2}\left(X_k + \operatorname*{argmin}_{Y}\ \left\| f(X_k, Y) - M \right\|_F^2\right)$$

The subproblem can be solved easily, as $\left\| f(X_k, Y) - M \right\|_F^2$ is convex in $Y$ (indeed, $f(X_k, Y)$ is linear in $Y$, so the subproblem is a linear least-squares problem). Any method for solving convex problems would give us the optimal $Y$.

This heuristic has worked pretty well in practice. My claim is that as $k \to \infty$, $X_k \to X^{\ast}$ at a quadratic rate.
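Here is a minimal numpy sketch of this heuristic under the formulation above. Solving the subproblem by vectorizing it with Kronecker products and calling `np.linalg.lstsq` is my own choice of convex solver, and the helper name and problem sizes are mine:

```python
import numpy as np

def sqrt_subproblem(Xk, M):
    """argmin_Y || (Xk @ Y + Y @ Xk) / 2 - M ||_F^2, a linear least-squares
    problem: with column-major vec, vec(Xk Y + Y Xk) equals
    (I kron Xk + Xk^T kron I) vec(Y)."""
    n = Xk.shape[0]
    I = np.eye(n)
    A = 0.5 * (np.kron(I, Xk) + np.kron(Xk.T, I))
    y, *_ = np.linalg.lstsq(A, M.reshape(-1, order="F"), rcond=None)
    return y.reshape(n, n, order="F")

rng = np.random.default_rng(0)
n, r = 12, 4
B = rng.standard_normal((n, r))
X_star = B @ B.T               # unknown PSD matrix of rank r
M = X_star @ X_star            # M = (X*)^2

C = rng.standard_normal((n, r))
X = C @ C.T                    # random PSD initial estimate of rank r
for _ in range(30):
    X = 0.5 * (X + sqrt_subproblem(X, M))

print(np.linalg.norm(X @ X - M))   # Frobenius residual ||X^2 - M||_F
```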

## Positive Semidefinite Matrix Completion

Let $M$ be an unknown $n \times n$ PSD matrix of rank $r$, and write $M = X^{\ast} (X^{\ast})^T$ for some $X^{\ast} \in \mathbb{R}^{n \times r}$. Assume $M$ is incoherent. Entries of $M$ are sampled independently with some probability $p$ (which depends on the incoherence parameter and a few other parameters). The goal here is to find $M$ given only the sampled entries of $M$. Let $\Omega$ be the binary sampling matrix. Trying to minimize the following objective function would not work, as it is non-convex:

$$\min_{X \in \mathbb{R}^{n \times r}}\ \left\| \Omega \circ \left( XX^T - M \right) \right\|_F^2$$

Here $\circ$ denotes the entrywise (Hadamard) product.

We initialize our estimate $X_0$ to be a random $n \times r$ matrix. We define

$$f(X, Y) = \frac{XY^T + YX^T}{2},$$

so that $f(X, X) = XX^T$. The iterations proceed as:

$$X_{k+1} = \frac{1}{2}\left(X_k + \operatorname*{argmin}_{Y}\ \left\| \Omega \circ \left( f(X_k, Y) - M \right) \right\|_F^2\right)$$

Similar to the previous case, the subproblem is convex in $Y$ and can be solved efficiently. This heuristic has also worked pretty well in practice. Similarly, my claim is that as $k \to \infty$, $X_k X_k^T \to M$ at a quadratic rate.
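Below is a minimal sketch of this loop using cvxpy as an off-the-shelf convex solver for the subproblem; the helper name, problem sizes, and sampling setup are my own illustration choices:

```python
import cvxpy as cp
import numpy as np

def psd_subproblem(Xk, M_obs, Omega):
    """argmin_Y || Omega o (f(Xk, Y) - M) ||_F^2 with
    f(X, Y) = (X Y^T + Y X^T) / 2; affine in Y, hence convex."""
    n, r = Xk.shape
    Y = cp.Variable((n, r))
    F = 0.5 * (Xk @ Y.T + Y @ Xk.T)
    cost = cp.sum_squares(cp.multiply(Omega, F - M_obs))
    cp.Problem(cp.Minimize(cost)).solve()
    return Y.value

rng = np.random.default_rng(1)
n, r, p = 15, 3, 0.7
B = rng.standard_normal((n, r))
M = B @ B.T                                                 # unknown rank-r PSD matrix
mask = rng.random((n, n)) < p
Omega = (np.triu(mask) | np.triu(mask, 1).T).astype(float)  # symmetric sampling mask

X = rng.standard_normal((n, r))
for _ in range(10):
    X = 0.5 * (X + psd_subproblem(X, Omega * M, Omega))

print(np.linalg.norm(X @ X.T - M))   # recovery error on the full matrix
```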

## Euclidean Distance Matrix Completion

Let $D$ be the EDM produced by an unknown $n \times r$ matrix $P^{\ast}$. The entries of $D$ are the squared Euclidean distances between the rows of $P^{\ast}$. Let $g$ be the following function on square matrices:

$$g(Y) = \operatorname{Diag}(Y)\, J + J\, \operatorname{Diag}(Y) - 2Y$$

Here $\operatorname{Diag}(Y)$ is the diagonal matrix of $Y$ (the off-diagonal entries zeroed out) and $J$ is the matrix of all $1$s. Entrywise, $g(Y)_{ij} = Y_{ii} + Y_{jj} - 2Y_{ij}$.

Given $P^{\ast}$, the EDM can be computed using the following equation:

$$D = g\left( P^{\ast} (P^{\ast})^T \right)$$

This holds because $g(PP^T)_{ij} = \|p_i\|^2 + \|p_j\|^2 - 2\langle p_i, p_j \rangle = \|p_i - p_j\|^2$, where $p_i$ denotes the $i$-th row of $P$.
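In code, $g$ and the EDM construction look like this (a small numpy sketch; the function name is mine):

```python
import numpy as np

def g(Y):
    """g(Y)_{ij} = Y_ii + Y_jj - 2 * Y_ij."""
    d = np.diag(Y)
    return d[:, None] + d[None, :] - 2 * Y

P = np.random.default_rng(2).standard_normal((6, 3))
D = g(P @ P.T)   # D_{ij} = ||p_i - p_j||^2, the squared pairwise distances
```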

Similar to the PSD completion problem, entries of $D$ are sampled independently with some probability $p$. The goal is to find $D$ given only its sampled entries. Let $\Omega$ be the binary sampling matrix. Trying to minimize the following objective function would not work, as it is non-convex:

$$\min_{P \in \mathbb{R}^{n \times r}}\ \left\| \Omega \circ \left( g(PP^T) - D \right) \right\|_F^2$$

Because of the way we defined $f$ in the previous section,

$$g\left( f(P^{\ast}, P^{\ast}) \right) = g\left( P^{\ast} (P^{\ast})^T \right) = D$$

We initialize our estimate $P_0$ to be a random $n \times r$ matrix. The iterations proceed as:

$$P_{k+1} = \frac{1}{2}\left(P_k + \operatorname*{argmin}_{Q}\ \left\| \Omega \circ \left( g(f(P_k, Q)) - D \right) \right\|_F^2\right)$$

Again, the subproblem is convex in $Q$, since both $g$ and $f(P_k, \cdot)$ are linear maps, and it can be solved efficiently. This heuristic has also worked pretty well in practice. Again, my claim is that as $k \to \infty$, $g(P_k P_k^T) \to D$ at a quadratic rate.
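A minimal cvxpy sketch of the subproblem under the formulation above (the helper name is mine); linearity of $g$ and $f(P_k, \cdot)$ keeps it a least-squares problem:

```python
import cvxpy as cp
import numpy as np

def edm_subproblem(Pk, D_obs, Omega):
    """argmin_Q || Omega o (g(f(Pk, Q)) - D) ||_F^2; convex in Q since
    both f(Pk, .) and g are linear maps."""
    n, r = Pk.shape
    Q = cp.Variable((n, r))
    G = 0.5 * (Pk @ Q.T + Q @ Pk.T)           # f(Pk, Q), affine in Q
    d = cp.reshape(cp.diag(G), (n, 1), order="F")
    ones = np.ones((n, 1))
    gG = d @ ones.T + ones @ d.T - 2 * G      # g(f(Pk, Q)), still affine in Q
    cost = cp.sum_squares(cp.multiply(Omega, gG - D_obs))
    cp.Problem(cp.Minimize(cost)).solve()
    return Q.value
```

The outer loop is the same averaging step as before: `P = 0.5 * (P + edm_subproblem(P, Omega * D, Omega))`.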

I have run several thousand experiments for all three problems, and this method has always worked. Why does it work? Does it work for other non-convex problems? For what types of non-convex problems does it work? All these are **open problems** and I would love to have some feedback.

Prateek Jain et al. give a gradient descent method for computing the matrix square root: Computing Matrix Squareroot via Non Convex Local Search.

It has been proved that PSD matrix completion has no spurious local minima. See these papers from NIPS 2016 and ICML 2017. So gradient descent methods would also work for this problem.

I strongly believe that EDM completion has no spurious local minima. See my previous blog post about this. In practice, gradient descent has worked for EDM completion.

Compared to gradient descent methods, my algorithm converges in fewer iterations, but each iteration takes more time; gradient descent takes more iterations to converge, but each iteration is fast. When $n$ is very large, gradient descent wins, as solving the convex subproblem at each iteration becomes very expensive.