Given a matrix $X$ and two ranks $r_1$ and $r_2$, the Hadamard decomposition (HD) consists of finding two low-rank matrices, $X_1$ of rank $r_1$ and $X_2$ of rank $r_2$, both of the same size as $X$, such that $X\approx X_1\circ X_2$, where $\circ$ is the Hadamard (element-wise) product. HD is more expressive than standard low-rank approximations, such as the truncated singular value decomposition (TSVD), as it can represent higher-rank matrices with the same number of parameters; this is because the rank of $X_1 \circ X_2$ is generically equal to $r_1 r_2$. In this paper, we first present some new theoretical results for HD. These allow us to develop two new manifold-based gradient descent algorithms for computing HDs: the first one uses the representation $X\approx X_1\circ X_2$, while the second one uses $X\approx WH^\top$, where $W$ and $H$ have $r_1 r_2$ columns. We also propose new initializations that allow us to obtain better HDs. We compare our algorithms and initialization strategies with the TSVD and with the state of the art. Numerical results show that the new methods are efficient and competitive, on both synthetic and real data.
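To make the optimization problem concrete, here is a minimal illustrative sketch (not the paper's manifold-based algorithms): each factor is parameterized as a low-rank product, $X_1 = AB^\top$ and $X_2 = CD^\top$, and the Frobenius error $\tfrac{1}{2}\|X_1\circ X_2 - X\|_F^2$ is minimized by plain gradient descent. All names, the learning rate, and the random initialization are assumptions for illustration only.

```python
import numpy as np

def hadamard_decomposition(X, r1, r2, lr=0.01, iters=2000, seed=0):
    """Illustrative sketch: fit X ~ (A @ B.T) * (C @ D.T) by plain
    gradient descent on 0.5 * ||X1 o X2 - X||_F^2.
    NOT the manifold-based method of the paper; a simple baseline."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    # Small random init (scale 0.3 is an arbitrary illustrative choice)
    A = 0.3 * rng.standard_normal((m, r1))
    B = 0.3 * rng.standard_normal((n, r1))
    C = 0.3 * rng.standard_normal((m, r2))
    D = 0.3 * rng.standard_normal((n, r2))
    for _ in range(iters):
        X1 = A @ B.T          # rank-r1 factor
        X2 = C @ D.T          # rank-r2 factor
        R = X1 * X2 - X       # residual of the Hadamard model
        # Gradients of 0.5*||R||_F^2 w.r.t. each factor matrix:
        RX2 = R * X2          # dLoss/dX1 (element-wise chain rule)
        RX1 = R * X1          # dLoss/dX2
        A -= lr * (RX2 @ B)
        B -= lr * (RX2.T @ A)
        C -= lr * (RX1 @ D)
        D -= lr * (RX1.T @ C)
    return A @ B.T, C @ D.T
```

On a matrix that is exactly a Hadamard product of two rank-2 matrices, this baseline drives the relative error well below the trivial value of 1 obtained at the (near-zero) initialization, illustrating why more careful algorithms and initializations, as studied in the paper, matter for reaching high accuracy.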