Basic description of the method

Integral equation of the analytic continuation problem

The analytic continuation problem amounts to solving a Fredholm integral equation of the first kind with an approximately known right hand side \(G(n)\),

\[\int_{\epsilon_\mathrm{min}}^{\epsilon_\mathrm{max}} K(n, \epsilon) A(\epsilon) d\epsilon = G(n).\]

The integral kernel \(K(n, \epsilon)\) and the integration interval \([\epsilon_\mathrm{min}; \epsilon_\mathrm{max}]\) depend on the kind of physical observable \(G(n)\) represents (Green’s function, susceptibility, conductivity). The formal discrete index \(n = \overline{1, N}\) may denote, for instance, a fermionic or bosonic Matsubara frequency, an imaginary time point, or the index of a Legendre polynomial coefficient.

The integral equation is solved with respect to a non-negative spectral function \(A(\epsilon)\) subject to a normalization condition

\[\int_{\epsilon_\mathrm{min}}^{\epsilon_\mathrm{max}} A(\epsilon) d\epsilon = \mathcal{N}.\]
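For concreteness, here is a minimal NumPy sketch of the discretized equation, using the fermionic Matsubara kernel \(K(n, \epsilon) = 1/(i\omega_n - \epsilon)\) as an example; all variable names and parameter values are illustrative and not part of any package API.

```python
import numpy as np

# Energy mesh for A(eps); values are illustrative
eps_min, eps_max, M = -5.0, 5.0, 400
eps = np.linspace(eps_min, eps_max, M)
deps = eps[1] - eps[0]

# Example kernel: fermionic Green's function at Matsubara frequencies,
# K(n, eps) = 1 / (i w_n - eps)
beta, N = 10.0, 64
wn = (2 * np.arange(N) + 1) * np.pi / beta
K = 1.0 / (1j * wn[:, None] - eps[None, :])   # shape (N, M)

# A model spectral function obeying the normalization constraint (here N = 1)
A = np.exp(-((eps - 0.5) ** 2))
A /= A.sum() * deps                           # enforce  int A(eps) d eps = 1

G = K @ A * deps                              # discretized left-hand side, G(n)
```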

Goodness of fit functionals

The right hand side \(G(n)\) is typically obtained via statistical (Monte Carlo) sampling and averaging of a random quantity,

\[G(n) = \frac{1}{S}\sum_{s=1}^S G^{(s)}(n).\]

In addition to this average, SOM requires knowledge of either estimated error bars

\[\sigma(n) = \sqrt{\frac{1}{S(S-1)} \sum_{s=1}^S |G^{(s)}(n) - G(n)|^2}\]

or of a full covariance matrix

\[\Sigma_{nn'} = \frac{1}{S(S-1)} \sum_{s=1}^S (G^{(s)}(n) - G(n))^* (G^{(s)}(n') - G(n')).\]
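Both quantities follow directly from their definitions. A sketch with synthetic samples, continuing the arrays from the example above:

```python
rng = np.random.default_rng(0)
S = 200                                  # number of Monte Carlo samples
# Synthetic complex samples G^{(s)}(n) scattered around the G(n) computed above
G_s = G[None, :] + 1e-3 * (rng.normal(size=(S, N)) + 1j * rng.normal(size=(S, N)))

G_avg = G_s.mean(axis=0)                 # sample average G(n)
d = G_s - G_avg                          # deviations from the average

# Estimated error bars sigma(n)
sigma = np.sqrt((np.abs(d) ** 2).sum(axis=0) / (S * (S - 1)))

# Full covariance matrix Sigma_{nn'}, shape (N, N)
Sigma = (d.conj().T @ d) / (S * (S - 1))
```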

When only the error bars are known, SOM tries to minimize the “goodness of fit” functional

\[\chi^2[A(\epsilon)] = \frac{1}{N} \sum_{n=1}^N \left|\frac{\Delta(n)}{\sigma(n)}\right|^2,\]

where \(\Delta(n)\) are discrepancies

\[\Delta(n) = \int_{\epsilon_\mathrm{min}}^{\epsilon_\mathrm{max}} K(n, \epsilon) A(\epsilon) d\epsilon - G(n).\]
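With the arrays from the previous sketches, the discrepancies and this functional take two lines of NumPy:

```python
# Discrepancies Delta(n) and the error-bar goodness-of-fit functional
Delta = K @ A * deps - G_avg
chi2 = np.mean(np.abs(Delta / sigma) ** 2)
```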

With the covariance matrix available, a more general functional can be used instead,

\[\chi^2[A(\epsilon)] = \frac{1}{N} \mathbf{\Delta}^\dagger \hat\Sigma^{-1} \mathbf{\Delta}, \quad \mathbf{\Delta} = \{\Delta(n)\}.\]

Due to statistical noise, some of \(\hat\Sigma\)’s eigenvalues may turn out to be negative, which would make \(\chi^2\) lack a global minimum. Furthermore, very small positive eigenvalues are still problematic, as they can lead to numerical instability of the minimization procedure. SOM tackles these problems by discarding all negative eigenvalues and shifting the positive ones by a constant \(l^2\) (\(l\) is referred to as the “filtering level”). This results in a modified \(\chi^2\)-functional

\[\chi^2[A(\epsilon)] = \frac{1}{\tilde N} \mathbf{\Delta}^\dagger \hat{\mathfrak{S}}^{-1} \mathbf{\Delta},\]

where \(\tilde N\) is the number of retained positive eigenvalues \(\sigma^2(n)\) and \(\hat{\mathfrak{S}}^{-1}\) is a regularized version of \(\hat\Sigma^{-1}\),

\[\hat{\mathfrak{S}}^{-1} = \hat U \mathrm{diag} \left(\frac{1}{\sigma^2(n) + l^2}\right) \hat U^\dagger.\]

Here, \(\hat U\) is an \(N{\times}\tilde N\) matrix whose columns are eigenvectors of \(\hat{\Sigma}\) corresponding to the retained eigenvalues.
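The regularization amounts to an eigen-decomposition of \(\hat\Sigma\). A sketch of the modified functional (the function name is illustrative):

```python
def chi2_filtered(Delta, Sigma, l):
    """Modified chi^2: discard negative eigenvalues of the Hermitian
    covariance matrix Sigma and shift the retained ones by l^2."""
    evals, evecs = np.linalg.eigh(Sigma)
    keep = evals > 0                     # retain positive eigenvalues sigma^2(n)
    U = evecs[:, keep]                   # N x N~ matrix of retained eigenvectors
    y = U.conj().T @ Delta               # project discrepancies onto the eigenbasis
    return float(np.real(np.vdot(y, y / (evals[keep] + l ** 2)))) / keep.sum()

chi2_reg = chi2_filtered(Delta, Sigma, l=1e-6)   # filtering level is illustrative
```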

Minimization of the \(\chi^2\)-functionals

The \(\chi^2\)-minimization proceeds in two steps.

At the first step, a stochastic minimization algorithm is used to accumulate \(J\) particular solutions corresponding to local minima of \(\chi^2\). Each solution is parameterized as a superposition of rectangles with positive heights,

\[ \begin{align}\begin{aligned}A(\epsilon) = \sum_{k=1}^K R_{\{c_k, w_k, h_k\}}(\epsilon),\\R_{\{c, w, h\}}(\epsilon) \equiv h \theta(w/2-|\epsilon-c|).\end{aligned}\end{align} \]
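Evaluating such a configuration on an energy mesh is straightforward; a sketch with an illustrative function name, reusing the mesh from the first example:

```python
def A_from_rectangles(eps, rects):
    """A(eps) as a sum of rectangles R_{c,w,h}; rects holds (c, w, h) triples."""
    A = np.zeros_like(eps)
    for c, w, h in rects:
        A[np.abs(eps - c) < w / 2] += h   # h * theta(w/2 - |eps - c|)
    return A

# Example: K = 2 rectangles on the mesh defined earlier
rects = [(-1.0, 2.0, 0.3), (1.5, 1.0, 0.4)]
A_rect = A_from_rectangles(eps, rects)
```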

Accumulation of each solution is performed using a Markov chain consisting of \(F\) global updates. In turn, each global update comprises \(T\) elementary updates that continuously change the center positions \(c_k\), widths \(w_k\) and heights \(h_k\), as well as the total number of rectangles \(K\). Further acceleration of the optimization procedure can be achieved by enabling the large-scale Consistent Constraints updates.
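For illustration only, here is a heavily simplified sketch of this stage: a single kind of elementary update (shifting one rectangle’s center) with a simple Metropolis-like acceptance rule. The actual method also changes widths, heights and the number of rectangles, and uses its own acceptance law; everything below, including the proposal scale, is an assumption of the sketch.

```python
rng = np.random.default_rng(1)

def chi2_of(rects):
    """Error-bar chi^2 of a rectangle configuration (arrays from above)."""
    Delta = K @ A_from_rectangles(eps, rects) * deps - G_avg
    return np.mean(np.abs(Delta / sigma) ** 2)

def shift_center(rects, chi2_old):
    """Elementary update: move one rectangle's center, accept or reject."""
    k = rng.integers(len(rects))
    c, w, h = rects[k]
    trial = rects.copy()
    trial[k] = (c + rng.normal(scale=0.1), w, h)
    chi2_new = chi2_of(trial)
    if chi2_new < chi2_old or rng.random() < chi2_old / chi2_new:
        return trial, chi2_new
    return rects, chi2_old

chi2_cur = chi2_of(rects)
for _ in range(1000):                    # T elementary updates (one global update)
    rects, chi2_cur = shift_center(rects, chi2_cur)
```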

At the second step, \(\tilde J\) out of the \(J\) particular solutions are classified as good based on the values \(\chi[A_j] \equiv \sqrt{\chi^2[A_j(\epsilon)]}\): a good solution must satisfy \(\chi[A_j] \leq \chi_c^\mathrm{abs}\) and \(\chi[A_j] \leq \min_{j'}(\chi[A_{j'}]) \chi_c^\mathrm{rel}\).
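The classification itself is a simple filter; the threshold values below are illustrative:

```python
chi = np.sqrt(np.array([0.9, 1.1, 1.0, 4.0]))  # chi[A_j] for J = 4 solutions
chi_c_abs, chi_c_rel = 2.0, 1.5                # illustrative thresholds
good = (chi <= chi_c_abs) & (chi <= chi.min() * chi_c_rel)
```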

The good solutions are then combined to form a final solution,

\[A(\epsilon) = \sum_{j=1}^{\tilde J} c_j A_j(\epsilon), \quad \sum_{j=1}^{\tilde J} c_j = 1.\]

In the traditional formulation of SOM [MPSS2000], the particular solutions are summed up with equal weights, \(c_j = 1/\tilde J\). It is also possible to employ a more sophisticated and customizable iterative procedure proposed in [GMPPS2017]. It yields a set of coefficients \(c_j\) that make for smoother spectra, and can optionally favor solutions close to a user-defined default model.
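With the equal-weight choice, the combination reduces to a plain average. A sketch, with `good_rects` standing in for the rectangle configurations of the good solutions:

```python
# Rectangle configurations of the J~ good solutions (illustrative placeholder)
good_rects = [[(-1.0, 2.0, 0.3), (1.5, 1.0, 0.4)],
              [(-0.8, 1.8, 0.35), (1.4, 1.2, 0.32)]]

A_good = np.stack([A_from_rectangles(eps, r) for r in good_rects])
A_final = A_good.mean(axis=0)            # c_j = 1 / J~ for all good solutions
```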

Instead of computing the final solution, one can use the statistical analysis technique from Sections I-II of [GMPPS2017] to estimate averages, dispersions and 2-point correlations of the particular solution ensemble.
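A sketch of such pointwise estimates over the stack of good solutions from the previous example:

```python
# Pointwise statistics over the ensemble of particular solutions
A_mean = A_good.mean(axis=0)             # average spectrum
A_disp = A_good.var(axis=0)              # pointwise dispersion
# 2-point correlations between mesh points eps_m and eps_m'
dA = A_good - A_mean
A_corr = dA.T @ dA / len(A_good)         # shape (M, M)
```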

Post-processing of spectral functions

There are a few operations one can perform with a computed final spectral function \(A(\epsilon)\).

  • Extract and inspect individual rectangles of the final solution in its sum-of-rectangles form.

  • Recover the real-frequency version of the studied observable by projecting it onto a real frequency mesh. For instance, the retarded Green’s function \(G^\mathrm{ret}(\epsilon)\) can be recovered from an imaginary-time fermionic Green’s function \(G(\tau)\).

  • Compute the high frequency expansion coefficients of the real-frequency observable.

  • Back-substitute \(A(\epsilon)\) into the original integral equation and reconstruct the right hand side (not necessarily on the same \(n\)-mesh). This feature is useful for a quick assessment of the quality of the solution; see the sketch after this list.
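A sketch of this back-substitution check, reusing the arrays from the earlier examples:

```python
# Reconstruct the right hand side from the final spectrum and compare
G_rec = K @ A_final * deps
print("max |G_rec - G| / sigma =", np.max(np.abs(G_rec - G_avg) / sigma))
```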