TensorTalk: Tensor network complexity of multilinear maps

Speaker: Radu Curticapean

Title: Tensor network complexity of multilinear maps

Abstract: Computational complexity theory seeks tight upper and lower bounds on the resources needed to solve algorithmic problems in different models of computation. Well-studied models include Turing machines, random-access machines, and Boolean circuits. While these models are general enough to capture most algorithmic techniques, they are too general for us to prove strong lower bounds against them. The situation is similar for models of arithmetic computation, such as arithmetic circuits.

In an ITCS 2019 paper, Austrin, Kaski, and Kubjas showed how tensor networks (given together with a sequence of tensor contractions) give rise to a computational model for multilinear maps that is "sandwiched" between low-rank tensor decompositions and fully general arithmetic circuits:

- There are multilinear maps corresponding to natural graph-theoretic problems that can be evaluated more efficiently via tensor networks than via low-rank tensor decompositions (a toy example is sketched after this list).

- Nevertheless, tensor networks are a sufficiently restricted computational model to allow for nontrivial lower bounds, e.g., an exponential lower bound for the permanent of n × n matrices (see the second sketch below).
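
As a rough illustration of the first point (not taken from the abstract or the paper), the following NumPy sketch evaluates one such multilinear map with a tiny tensor network: three copies of a graph's adjacency matrix, contracted along a triangle of shared indices, count the triangles of the graph. The function name and example graph are ours; the paper's constructions and their cost analysis are far more general.

```python
import numpy as np

# A tiny tensor network: three copies of the adjacency matrix A,
# contracted along a 3-cycle of shared indices i, j, k:
#   sum_{i,j,k} A[i,j] * A[j,k] * A[k,i] = trace(A^3).
# This multilinear map counts closed walks of length 3; every labeled
# triangle contributes 6 of them (3 starting points x 2 orientations).

def count_triangles(A: np.ndarray) -> int:
    closed_walks = np.einsum("ij,jk,ki->", A, A, A)  # one possible contraction order
    return int(closed_walks) // 6

# The complete graph K4 has C(4,3) = 4 triangles.
A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
print(count_triangles(A))  # 4
```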
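
The permanent in the second point is the multilinear map per(A) = sum over all permutations s of the products A[1, s(1)] · ... · A[n, s(n)]; it looks like the determinant but without signs, and no polynomial-size evaluation method is known. The brute-force sketch below only spells out this definition and has nothing to do with the paper's lower-bound argument; Ryser's formula improves the roughly n! · n cost here to about 2^n · n arithmetic operations.

```python
from itertools import permutations
import numpy as np

def permanent(A: np.ndarray) -> float:
    # per(A) = sum over all permutations s of prod_i A[i, s(i)];
    # this direct evaluation uses about n! * n multiplications.
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in permutations(range(n))
    )

# The permanent of the all-ones n x n matrix is n!.
print(permanent(np.ones((3, 3))))  # 6.0
```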

In this talk, I will present the key ideas of the paper by Austrin, Kaski, and Kubjas, including the relevant background from arithmetic circuit complexity.