# Stochastic Optimization Method for Analytic Continuation

SOM is a TRIQS-based implementation of the Stochastic Optimization Method for Analytic Continuation [MPSS2000], which solves a family of Fredholm integral equations of the first kind. Numerical solution of such equations is required to reconstruct a real-frequency spectral representation of physical observables (Green’s functions, dynamical susceptibilities) from noisy results of Quantum Monte Carlo simulations.

Said integral equations have the following general form,

\[
G(n) = \int_{\epsilon_\mathrm{min}}^{\epsilon_\mathrm{max}} K(n, \epsilon) A(\epsilon)\, d\epsilon,
\]

where \(G(n)\) is an input quantity, known with some uncertainty, and
\(A(\epsilon)\) is the spectral function to be found. The meaning of the index
\(n\), the limits of the integral, and the form of the kernel
\(K(n, \epsilon)\) depend on the problem at hand.
\(A(\epsilon)\) is assumed to be non-negative and normalized to a known
constant \(\mathcal{N}\). A full list of supported observables with their
respective integral kernels is given in the documentation.
All equations of the considered family share a common property of being
*ill-posed problems*: their solutions are not unique, and tiny variations of the
right-hand side may result in huge changes of the solution.
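The forward problem is easy to set up numerically. The following sketch discretizes the Fredholm equation on an energy grid, using the standard fermionic Matsubara kernel \(K(i\omega_n, \epsilon) = 1/(i\omega_n - \epsilon)\) as an example; all names and grid choices here are illustrative and independent of SOM's actual API.

```python
# Illustrative discretization of G(n) = ∫ K(n, ε) A(ε) dε.
# Fermionic Matsubara kernel K(iω_n, ε) = 1/(iω_n − ε) is used as an example.
import numpy as np

beta = 10.0                                           # inverse temperature (assumed)
n_iw, n_eps = 50, 400
iw = 1j * np.pi * (2 * np.arange(n_iw) + 1) / beta    # Matsubara frequencies iω_n
eps, d_eps = np.linspace(-5, 5, n_eps, retstep=True)  # energy grid [ε_min, ε_max]

# Example spectral function: a normalized Gaussian, A(ε) ≥ 0 and ∫ A dε = N = 1
A = np.exp(-eps**2 / 2) / np.sqrt(2 * np.pi)

K = 1.0 / (iw[:, None] - eps[None, :])                # kernel matrix K(n, ε)
G = K @ A * d_eps                                     # right-hand side G(n)
```

Ill-posedness shows up when one tries to invert this relation: the kernel matrix `K` is severely ill-conditioned, so very different spectra `A` can reproduce `G` within the Monte Carlo error bars.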

The Stochastic Optimization Method uses Markov chains combined with other optimization techniques to minimize one of the following objective functions (“goodness of fit” functionals),

\[
\chi^2[A] = \sum_{n=1}^N \frac{|\Delta(n)|^2}{\sigma^2(n)}
\quad\text{or}\quad
\chi^2[A] = \sum_{n,n'=1}^N \Delta^*(n)\, \hat\Sigma^{-1}(n,n')\, \Delta(n'),
\]

where \(\mathbf{\Delta} = \{\Delta(n)\}\) are the discrepancies

\[
\Delta(n) = \int K(n, \epsilon) A(\epsilon)\, d\epsilon - G(n).
\]

These two \(\chi^2\)-functionals correspond to the two cases when either the estimated error bars \(\sigma(n)\) or the \(N\times N\) covariance matrix \(\hat\Sigma\) of the input data \(G(n)\) are known.
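Written out with NumPy, the two functionals look as follows. This is a hedged illustration of the formulas only, not SOM's interface; the data, error bars, and the (here diagonal) covariance matrix are made up for the example, and in the diagonal case the two expressions must agree.

```python
# The two χ² goodness-of-fit functionals, evaluated for illustrative data.
import numpy as np

rng = np.random.default_rng(0)
N = 20
G_exact = np.exp(-np.arange(N) / 5.0)   # some "exact" input data (assumed)
sigma = np.full(N, 0.01)                # estimated error bars σ(n)
G = G_exact + rng.normal(0.0, sigma)    # noisy input G(n)

G_model = G_exact                       # fit ∫ K(n,ε) A(ε) dε, here taken exact
Delta = G_model - G                     # discrepancies Δ(n)

# Case 1: only the error bars σ(n) are known
chi2_diag = np.sum(np.abs(Delta)**2 / sigma**2)

# Case 2: the full N×N covariance matrix is known (diagonal here, for comparison)
Sigma = np.diag(sigma**2)
chi2_full = Delta.conj() @ np.linalg.inv(Sigma) @ Delta
```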

An important feature of the algorithm is its “continuous-energy” representation of solutions. The function \(A(\epsilon)\) is parameterized as a sum of rectangles with certain positions, widths, and heights.
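The rectangle parameterization can be sketched in a few lines. The tuple layout `(center, width, height)` and the function name below are illustrative assumptions, not SOM's data structures; the point is that \(A(\epsilon)\) is piecewise constant and its norm is an exact sum \(\sum_k w_k h_k\) over the rectangles.

```python
# Sketch of the "continuous-energy" representation: A(ε) as a sum of rectangles.
import numpy as np

# Each rectangle: (center c, width w, height h) — layout assumed for illustration
rectangles = [(-1.0, 0.5, 0.6), (0.8, 1.0, 0.4)]

def A(eps, rects):
    """Evaluate the sum-of-rectangles spectral function at energies eps."""
    eps = np.asarray(eps, dtype=float)
    out = np.zeros_like(eps)
    for c, w, h in rects:
        out += h * ((eps >= c - w / 2) & (eps < c + w / 2))
    return out

# The normalization constant N = Σ_k w_k h_k follows exactly from the parameters
norm = sum(w * h for _, w, h in rectangles)
```

Because the norm is exact in the parameters, the non-negativity and normalization constraints on \(A(\epsilon)\) are easy to enforce during the Markov-chain updates.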

SOM release 2.0 brings a number of new features, most notably a complete implementation of the Stochastic Optimization with Consistent Constraints (SOCC) protocol proposed in [GMPPS2017].

> **Note**
>
> This code uses MPI parallelism and is considerably more CPU-intensive than algorithms based on Bayesian statistical inference (MaxEnt) or Padé approximants. Its primary design goal is the ability to resolve fine spectral features, rather than giving a quick insight into the general shape of the spectrum.