Consider an arbitrary continuous function $\phi(\mathbf{r})$ on the cube $\mathbf{r} = (x,y,z) \in [0, b]^3$. We approximate $x$ by a binary tuple $(x_1 x_2 x_3 \dots x_n)\in \{0,1\}^n$ using the following formula:
$$ \begin{align*} x \approx b\left( \frac{x_1}{2^1} + \frac{x_2}{2^2} + \dots + \frac{x_n}{2^n}\right). \end{align*} $$

Doing the same for $y$ and $z$, we can approximately write $\phi(\mathbf{r})$ in an “enumerated” form as the tensor
$$ \begin{align*} \phi(\mathbf{r})\approx\Phi_{x_1x_2...x_ny_1y_2...y_nz_1z_2...z_n}=:\Phi_{\mathbf{r}}, \end{align*} $$
where $\Phi_{\mathbf{r}}$ is a large tensor with $(2^n)^3$ elements. To represent the function $\phi(\mathbf{r})$ more accurately, we need a fine grid, that is, a sufficiently large $n$. However, the space complexity then grows exponentially in $n$, which is prohibitively expensive. So, how can we reduce the space complexity as much as possible while maintaining competitive accuracy?
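As a concrete illustration of the construction above, here is a minimal sketch in Python with NumPy. The test function `phi`, the bit count `n = 5`, and the box size `b = 1.0` are illustrative assumptions, not from the original: it samples the function on a $2^n$ grid per axis and reshapes the samples into a tensor with $3n$ binary indices, making the $2^{3n}$ element count explicit.

```python
# Minimal sketch of the binary ("quantics") encoding above.
import numpy as np

n = 5                    # bits per coordinate (2^n grid points per axis) -- illustrative
b = 1.0                  # edge length of the cube [0, b]^3 -- illustrative

def phi(x, y, z):        # hypothetical test function, stands in for phi(r)
    return np.sin(np.pi * x) * np.cos(np.pi * y) * np.exp(-z)

# Grid points x = b * (x_1/2 + x_2/4 + ... + x_n/2^n) over all bit strings;
# these are exactly the values k * b / 2^n for k = 0, ..., 2^n - 1.
grid = b * np.arange(2**n) / 2**n

X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
Phi = phi(X, Y, Z)                      # shape (2^n, 2^n, 2^n)

# Split each coordinate index into n binary indices (most significant bit first),
# giving 3n axes of dimension 2: Phi_{x_1...x_n y_1...y_n z_1...z_n}.
Phi_bits = Phi.reshape([2] * (3 * n))
print(Phi_bits.shape, Phi_bits.size)    # (2, 2, ..., 2), 2**(3*n) elements
```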
Fortunately, any tensor can be decomposed into a tensor train (or, more generally, a tensor network), as shown in Fig. 1.
Fig. 1. Tensor train.
Here $\cdot$ denotes tensor contraction, and the space complexity is reduced to $2\sum_{i=0}^{3n-1}D_{i} D_{i+1}$ with $D_{0}=D_{3n}=1$, which scales as $\mathcal{O}(n)$ as long as the bond dimensions $D_i$ remain bounded.
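To make the parameter count concrete, here is a minimal sketch of a plain TT-SVD (successive truncated SVDs). This is a standard construction, not necessarily the exact procedure behind Fig. 1; the helper name `tt_svd` and the truncation threshold `eps` are assumptions.

```python
# Minimal TT-SVD sketch: unfold a d-index tensor (each index of size 2) into
# a chain of cores of shape (D_left, 2, D_right) via successive truncated SVDs.
import numpy as np

def tt_svd(tensor, eps=1e-10):
    d = tensor.ndim
    cores, D_left = [], 1
    mat = tensor.reshape(D_left * 2, -1)
    for _ in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        D_right = max(1, int(np.sum(S > eps * S[0])))    # truncated bond dimension
        cores.append(U[:, :D_right].reshape(D_left, 2, D_right))
        mat = (S[:D_right, None] * Vt[:D_right]).reshape(D_right * 2, -1)
        D_left = D_right
    cores.append(mat.reshape(D_left, 2, 1))              # last core
    return cores

cores = tt_svd(Phi_bits)                 # Phi_bits from the previous sketch
n_params = sum(c.size for c in cores)    # equals 2 * sum_i D_i * D_{i+1}
print([c.shape for c in cores], n_params, "vs", Phi_bits.size)
```

The printed core shapes expose the bond dimensions $D_i$, and `n_params` is exactly the parameter count $2\sum_{i}D_{i} D_{i+1}$ quoted above, to be compared against the $2^{3n}$ entries of the full tensor.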
Citation
Cited as:

Zhao, Tong. (Nov 2024). Tensor Train for Function Representation. zhaotong94.github.io. https://zhaotong94.github.io/blog/ttfr/.

Or
@article{zhao2024tensor,
  title   = "Tensor Train for Function Representation",
  author  = "Zhao, Tong",
  journal = "zhaotong94.github.io",
  year    = "2024",
  month   = "Nov",
  url     = "https://zhaotong94.github.io/blog/ttfr/"
}
...