The workshop is organized within the scope of a French-Czech project funded by CNRS, also involving Roxana Hess, Martin Kružík, Pierre Maréchal, Jean Bernard Lasserre and Tillmann Weisser.
Tuesday, October 11, 2016
The talk will present generalizations of semidefinite programming formulations of 1-norm optimization problems over infinite dictionaries of vectors of complex exponentials, which were recently proposed for superresolution, 'gridless' compressed sensing, and other applications in signal processing. Results related to the generalized Kalman-Yakubovich-Popov lemma in linear system theory provide simple, constructive proofs of the semidefinite representations of the penalty functions used in these applications. The connection leads to several extensions to gauge functions and atomic norms for sets of vectors and matrices parameterized via the nullspace of matrix pencils. The results will be illustrated with examples from spectral estimation and array processing. (Joint work with Hsiao-Han Chao.)
We will present two approaches to the decomposition of a large matrix inequality into several smaller ones, with the goal of efficiently using existing SDP solvers. The approaches will be demonstrated on an SDP problem arising in topology optimization of mechanical structures. The first, well-known approach is based on the decomposition of chordal graphs. The second one uses the structure of the underlying PDE and its discretization. We will show that the second approach can be considered a sparse low-rank version of the first one and that it leads to a significantly more efficient solution.
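For intuition, the chordal-graph approach can be illustrated in its simplest instance: a positive semidefinite matrix with tridiagonal sparsity (whose sparsity graph is chordal, with cliques {i, i+1}) splits into a sum of small PSD blocks via sequential Schur complements. The following pure-Python sketch is illustrative only, not the talk's implementation, and the matrix is a hypothetical example:

```python
def clique_decompose_tridiag(A):
    """Split a PSD tridiagonal matrix into 2x2 PSD blocks M_i supported on the
    cliques {i, i+1} of the (chordal) sparsity graph: A = sum_i E_i^T M_i E_i."""
    n = len(A)
    r = [A[i][i] for i in range(n)]   # diagonal mass not yet assigned to a clique
    blocks = []
    for i in range(n - 1):
        b = A[i][i + 1]
        if i < n - 2:
            t = b * b / r[i] if r[i] > 0 else 0.0  # Schur-complement share (keeps M_i PSD)
        else:
            t = r[i + 1]                           # last clique absorbs the remainder
        blocks.append([[r[i], b], [b, t]])
        r[i + 1] -= t
    return blocks

# discrete 1D Laplacian: PSD with tridiagonal sparsity
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
blocks = clique_decompose_tridiag(A)
```

Each 2x2 block is PSD and the blocks sum back to A, so a single 4x4 matrix inequality is replaced by three 2x2 ones.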
Wednesday, October 12, 2016
The Fourier-based homogenization solvers were introduced by Moulinec and Suquet into the field of computational micromechanics of materials in 1994. Since then, they have established themselves as a competitive alternative to finite elements in terms of accuracy, efficiency, versatility, and simplicity of implementation. In its basic version, the method works as a fixed-point iterative solution to a periodic Lippmann-Schwinger integral equation, whose kernel can be efficiently handled by the Fast Fourier Transform (FFT).
Recently, we have interpreted and analyzed FFT-based methods in a Galerkin framework that involves the four standard steps: (i) introducing a weak form of the governing equations, (ii) projecting the weak form to an approximation space of trigonometric polynomials, (iii) applying a numerical quadrature, (iv) solving the ensuing system of linear equations by a suitable iterative solver.
In particular, the original Moulinec-Suquet scheme is recovered when (i) the weak form involves the gradients of the field variables, (ii) the approximation space is spanned by trigonometric polynomials, (iii) the trapezoidal rule is employed for numerical integration, and step (iv) is performed by the Richardson iteration.
The purpose of this talk is twofold: to summarize these developments and to explain how they can be used to develop more efficient FFT-based solvers, considering scalar elliptic problems for simplicity. Specifically, I will focus on a posteriori error estimation based on duality arguments, related to steps (i)-(iii) above, and on the comparison of iterative solvers, i.e., step (iv).
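The basic fixed-point scheme described above can be sketched in one spatial dimension for a scalar (conductivity) problem. The following pure-Python example is illustrative only (naive O(N^2) DFT, made-up two-phase coefficient); in 1D the exact effective coefficient is the harmonic mean, which the converged iteration reproduces:

```python
import cmath

def dft(a):
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(A):
    N = len(A)
    return [sum(A[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 16
sigma = [1.0 if n < N // 2 else 5.0 for n in range(N)]  # two-phase coefficient
sigma0 = 3.0                                            # reference medium
E = 1.0                                                 # prescribed mean gradient

e = [E] * N
for _ in range(100):
    tau = [(sigma[n] - sigma0) * e[n] for n in range(N)]  # polarization stress
    T = dft(tau)
    # periodic Green operator of the reference medium: multiplier 1/sigma0 in 1D
    Ehat = [-T[k] / sigma0 for k in range(N)]
    Ehat[0] = E * N                                       # enforce the mean gradient
    e = idft(Ehat)

# effective coefficient; in 1D it equals the harmonic mean of sigma (here 5/3)
sigma_eff = sum(sigma[n] * e[n] for n in range(N)) / N
```

The fixed-point (Richardson) iteration contracts here because 0 < sigma < 2*sigma0 pointwise; at convergence the flux sigma(x)*e(x) is constant on the grid.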
This is a joint work with Jaroslav Vondřejc (Technische Universität Braunschweig), Ivo Marek (Czech Technical University in Prague), and Nachiketa Mishra (Tata Institute of Fundamental Research).
This paper investigates output synchronization of heterogeneous linear time-invariant systems. Agents distributively communicate measured outputs and synchronize on regulated outputs. The necessary geometric structure of the single agents' drift dynamics is exploited. Geometric relations between single-agent dynamics, measured outputs, and regulated outputs are investigated. The total system's cooperative stability conditions reduce to requirements depending separately on the single-agent structure and the interconnecting graph topology, allowing for a distributed control design. A sufficient condition is given, based on structurally motivated coordinate transformations that clearly reveal the effects of the distributed control on the single agents. In this case, it is shown that identical subsystem state synchronization and robustness to interconnections guarantee cooperative stability and regulated output synchronization.
Cooling device manufacturers are constantly striving for improved system performance. Driven by increasingly stringent conditions for device rating and certification, the industry has seen significant process development over recent years, from individual component design to overall vapour compression cycle design. On the other hand, stagnant control technology development in the field has meant that the potential of such process developments for overall process efficiency improvement has not been fully utilized. This work presents recent efforts to bridge this technology gap. A nonlinear setpoint optimization problem is formulated in order to exploit the thermodynamic degrees of freedom. The solution is approximated using multivariate polynomials on sets of active constraints. An inferential sensor is constructed to estimate system efficiency degradation due to frost formation. The inferential sensor utilizes a formulation of an extended Kalman filter for systems with uncertain parameters. The nonlinear dynamics of the system are approximated as a linear parameter-varying (LPV) model. Motivated predominantly by limited platform resources, a gain-scheduled MPC formulation utilizing the range control concept is presented. The primary goal of this solution is the maximization of year-round operating efficiency (referred to as the seasonal coefficient of performance, or SCOP).
Polynomial optimal control problems (P-OCPs) can be approached by discretization methods leading to large-scale polynomial optimization problems (POPs). POPs have been attacked by hierarchies of linear and semidefinite programs. As the size of these programs grows quickly with increasing relaxation order - in particular when the number of variables is large - this theory was not useful for P-OCPs until Waki et al. (2006) proposed a sparse version of the semidefinite hierarchy. When the POP has some sparsity structure, typically introduced by the discretization of P-OCPs, convergence of the sparse semidefinite hierarchy to the POP's optimal value was proved by Lasserre (2006) and Kojima et al. (2009). Building on these results, we present a convergence theorem for a sparse linear hierarchy. From this we deduce a hybrid hierarchy, which converges better than the linear hierarchy, while the semidefinite cost remains constant for all relaxation orders (in contrast to the semidefinite hierarchy).
Thursday, October 13, 2016
The use of mollifiers for the regularization of linear inverse problems finds its roots in the late 1980s and early 1990s. Two approaches have been developed independently: the well-known approximate inverses on the one hand, based on duality in Hilbert spaces, and Fourier synthesis on the other hand, which belongs to variational methods. The second approach, which was originally designed for problems of deconvolution and of aperture synthesis in astronomy, has been extended later on to various inverse problems, especially in tomography.
Both approaches have in common that, prior to any technical choice, a target object is clearly defined in terms of the unknown true object: the initial ill-posed problem is replaced by that of recovering a smoothed version of the unknown object, smoothness being expressed in terms of convolution. Both admit the possibility of performing reconstructions with variable convolution kernels.
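In the deconvolution case, the target-object idea is concrete: instead of the unknown f, one recovers the mollified object phi*f from data g = K*f, via the Fourier multiplier phi_hat/K_hat, which stays bounded whenever phi smooths more than K. A pure-Python sketch with illustrative Gaussian kernel widths and a naive O(N^2) DFT (not taken from the talk):

```python
import cmath, math

def dft(a):
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(A):
    N = len(A)
    return [sum(A[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
grid = [n / N for n in range(N)]
# hypothetical true object: two bumps on the periodic unit interval
f = [math.exp(-200 * (t - 0.3) ** 2) + 0.5 * math.exp(-400 * (t - 0.7) ** 2) for t in grid]

def gauss_hat(k, s, N=N):
    # Fourier multiplier of a periodic Gaussian convolution kernel of width s
    xi = k if k <= N // 2 else k - N
    return math.exp(-0.5 * (2 * math.pi * s * xi) ** 2)

F = dft(f)
G = [gauss_hat(k, 0.03) * F[k] for k in range(N)]   # data g = K * f (blur)
# mollified solution: target phi * f with a WIDER mollifier phi, so that
# phi_hat / K_hat <= 1 everywhere and the inversion does not amplify noise
f_beta = idft([gauss_hat(k, 0.05) / gauss_hat(k, 0.03) * G[k] for k in range(N)])
```

With noiseless data the reconstruction equals phi*f exactly; with noise, the boundedness of the multiplier is precisely the stability gained by aiming at the smoothed target rather than at f itself.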
In order to obtain a general construction for the variational approach to mollification, it is necessary to find solutions to an intertwining relationship between operators. Such solutions are rather immediate for problems that involve the Fourier transformation (deconvolution, inversion of the truncated Fourier operator, inversion of the Radon transformation).
In the general case where the operator to be inverted shows no remarkable interplay with the Fourier transformation, a little more work is necessary to obtain intertwining operators. We need to work out their definition, existence and application. In the favorable cases, the intertwining operator may be applied via the unbounded pseudoinverse of the model operator. Applying this unbounded operator may be performed in a stable manner by using the well-known proximal point operator.
Our aim in this paper is twofold: we wish to review in a clear and concise manner the mollification approaches to linear inverse ill-posed problems mentioned above; and we propose a general construction for the variational approach which, in addition to extending the realm of applicability, will also offer a lot of flexibility in the choice of the target object. We mainly focus here on the construction itself, which relies on technical results obtained elsewhere.
The overall purpose of this research is to provide new tools for the analysis of structural econometric models. Our focus will be on nonparametric procedures that permit estimation of causal effects while avoiding the strong assumptions required by parametric procedures or deconvolution models. In the first case, our research is motivated by the analysis of structural causal models with endogeneity. Endogeneity may be due to omitted variables, measurement errors, or simultaneity. Nonparametric regression with endogeneity gives rise to an ill-posed inverse problem. In the second case, our objective is to reconstruct the density function of a random variable from the observation of noisy data. Our contribution is to solve both problems using a mollification approach. Mollification has been investigated for linear ill-posed equations, specifically for deconvolution in signal and image processing, by Lannes et al. (1987), and has been recently extended by Bonnefond and Maréchal (2009) and Maréchal (2016) to more general situations. We investigate the use of this technique in the econometric context.
Tuesday, October 18, 2016
Eigenvectors of square matrices are central to linear algebra. Eigenvectors of tensors are a natural generalization. The spectral theory of tensors was pioneered by Lim and Qi a decade ago, and it has found numerous applications. We discuss the use of orthogonal tensor decompositions in data analysis, and we present work with Abo and Seigal aimed at characterizing which configurations of vectors arise as the eigenvectors of some tensor. This lecture also serves as an invitation to applied algebraic geometry. See https://math.berkeley.edu/~bernd/noticesarticle.pdf
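To make the generalization concrete: for a symmetric order-3 tensor T, a (Z-)eigenpair satisfies T(x, x, .) = lambda * x with ||x|| = 1. A toy power-iteration sketch on a diagonal tensor, whose unit coordinate vectors are known eigenvectors (values illustrative, not from the lecture):

```python
import math

def apply_tensor(T, x):
    # contract a symmetric order-3 tensor twice: (T x x)_i = sum_jk T[i][j][k] x_j x_k
    n = len(x)
    return [sum(T[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
            for i in range(n)]

# diagonal 2x2x2 tensor with entries (2, 1): the unit vectors e_1, e_2 are
# eigenvectors, with eigenvalues 2 and 1
n = 2
T = [[[0.0] * n for _ in range(n)] for _ in range(n)]
T[0][0][0], T[1][1][1] = 2.0, 1.0

x = [0.8, 0.6]                 # generic starting point on the unit sphere
for _ in range(100):
    y = apply_tensor(T, x)     # higher-order power step: x <- T x x, renormalized
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]

y = apply_tensor(T, x)
lam = sum(x[i] * y[i] for i in range(n))   # Rayleigh-type eigenvalue estimate
```

The iteration converges here to the robust eigenvector e_1 with eigenvalue 2; unlike the matrix case, the number of eigenvectors of a generic tensor grows with dimension and order, which is where the algebraic geometry enters.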
We determine the algebraic degree of minimal problems for the calibrated trifocal variety in computer vision. We rely on numerical algebraic geometry and the homotopy continuation software Bertini.
Given a univariate polynomial, its abscissa is the maximum real part of its roots. The abscissa arises naturally when controlling linear differential equations. As a function of the polynomial coefficients, the abscissa is Hölder continuous, and not locally Lipschitz in general, which is a source of numerical difficulties for designing and optimizing control laws. In this paper we propose simple approximations of the abscissa given by polynomials of fixed degree, and hence controlled complexity. Our approximations are computed by a hierarchy of finite-dimensional convex semidefinite programming problems. When their degree tends to infinity, the polynomial approximations converge in norm to the abscissa, either from above or from below. Joint work with Didier Henrion, Jean Bernard Lasserre and Tien Son Pham; see arXiv:1507.08463.
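For a fixed polynomial the abscissa is easy to evaluate by computing all roots; the difficulty addressed in the talk is its non-smooth dependence on the coefficients. A self-contained pointwise evaluation via Durand-Kerner root finding (a generic sketch, unrelated to the paper's semidefinite approximations):

```python
def abscissa(coeffs):
    """Max real part of the roots of a polynomial with coefficients
    [a_n, ..., a_1, a_0] (highest degree first), via Durand-Kerner iteration."""
    n = len(coeffs) - 1
    c = [a / coeffs[0] for a in coeffs]            # normalize to a monic polynomial
    def p(z):
        r = 0j
        for a in c:
            r = r * z + a                          # Horner evaluation
        return r
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct starting points
    for _ in range(200):
        new = []
        for i, zi in enumerate(roots):
            d = 1 + 0j
            for j, zj in enumerate(roots):
                if i != j:
                    d *= zi - zj
            new.append(zi - p(zi) / d)             # simultaneous Newton-type update
        roots = new
    return max(z.real for z in roots)
```

For example, s^2 + 2s + 5 has roots -1 +/- 2i, so its abscissa is -1 and the corresponding linear ODE is stable; s^2 - 3s + 2 has abscissa 2.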