The workshop is organized in the framework of the Necas Center for Mathematical Modeling, a partner institution of eu-maths-in.cz.
Given a set of point correspondences in two image planes, the existence of a fundamental matrix is a necessary condition for the points to be the images of a 3-dimensional scene imaged with two pinhole cameras. If the cameras are calibrated, one instead requires the existence of an essential matrix. I will describe an efficient algorithm, using exact linear algebra, for testing the existence of a fundamental matrix; the input is any number of point correspondences. For essential matrices, we characterize the solvability of the defining polynomials using methods from algebraic geometry. The conditions we derive are polynomials stated purely in terms of image coordinates, and they represent a new class of two-view invariants.
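As a floating-point illustration of the underlying condition (the talk's algorithm works in exact arithmetic; the function names and tolerances below are my own), a fundamental matrix F satisfies x2^T F x1 = 0 for every correspondence, so its existence amounts to the n x 9 design matrix of these constraints having a rank-2 matrix in its null space. A minimal sketch for the generic one-dimensional-null-space situation:

```python
import numpy as np

def fundamental_matrix_candidate(x1, x2):
    """Given n >= 8 correspondences x1[i] <-> x2[i] (2-D points),
    build the n x 9 design matrix of the epipolar constraints
    x2^T F x1 = 0 and return the candidate F from its null space."""
    n = len(x1)
    A = np.zeros((n, 9))
    for i in range(n):
        u, v = x1[i]
        up, vp = x2[i]
        A[i] = [up * u, up * v, up, vp * u, vp * v, vp, u, v, 1.0]
    # null vector of A ~ right singular vector of the smallest singular value
    _, s, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    return F, s

def epipolar_consistent(x1, x2, tol=1e-8):
    """A fundamental matrix exists (numerically) iff the design matrix
    is rank-deficient and its null space contains a rank-2 matrix; here
    we only check the generic case of a one-dimensional null space."""
    F, s = fundamental_matrix_candidate(x1, x2)
    rank_deficient = s[-1] <= tol * s[0]
    rank_two = abs(np.linalg.det(F)) <= tol
    return rank_deficient and rank_two
```

The exact-linear-algebra algorithm of the talk replaces these numerical rank decisions by symbolic ones and handles the degenerate null-space dimensions as well.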
Joint work with Sameer Agarwal, Hon-Leung Lee, and Bernd Sturmfels.
Consider a symmetric matrix whose entries depend linearly on some parameters ranging over compact real intervals.
First, we investigate the problem of checking whether for each (or some) setting of the parameters the matrix is positive definite (or positive semidefinite). We state a characterization in the form of equivalent conditions, and also propose some computationally cheap sufficient or necessary conditions. Our results extend the classical results on positive (semi-)definiteness of interval matrices. They may be useful for checking convexity or non-convexity in global optimization methods that are based on the branch-and-bound framework and use interval techniques.
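For the classical non-parametric case that these results extend, positive semidefiniteness of a symmetric interval matrix [Ac - Delta, Ac + Delta] can be decided by testing finitely many vertex matrices Ac - D_z Delta D_z over sign vectors z (exponential in the dimension, but cheap for small n). A sketch, with illustrative names:

```python
import itertools
import numpy as np

def interval_psd(Ac, Delta, tol=1e-12):
    """Check whether every symmetric matrix A with
    Ac - Delta <= A <= Ac + Delta (entrywise) is positive semidefinite.
    Classical vertex criterion: it suffices to test the 2^(n-1)
    vertex matrices Ac - diag(z) @ Delta @ diag(z), z in {-1, 1}^n."""
    n = Ac.shape[0]
    for z in itertools.product([-1.0, 1.0], repeat=n - 1):
        D = np.diag((1.0,) + z)          # fix z[0] = 1 by symmetry
        V = Ac - D @ Delta @ D
        if np.linalg.eigvalsh(V)[0] < -tol:
            return False
    return True
```

The parametric case of the talk is harder precisely because the entries are coupled through shared parameters, so this vertex enumeration over independent entries is only a starting point.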
In the second part of the presentation, we review the classical alpha-BB method, a deterministic global optimization method whose key step is to determine a convex underestimator of the objective function on an interval domain. Its particular feature is that it encloses the range of the Hessian matrix in an interval matrix. To obtain a tighter enclosure of the Hessian matrices, we utilize the aforementioned linear parametric form; one way to obtain this form is by using a slope extension of the Hessian entries. Numerical examples indicate that our approach can sometimes significantly reduce the overestimation of the objective function. However, the slope extensions depend strongly on the choice of the center of linearization; so far, no clear winner has emerged.
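To make the alpha-BB step concrete: one standard (Gershgorin-type) choice computes alpha from an interval enclosure of the Hessian, and the underestimator adds a quadratic term that is nonpositive on the box. A minimal sketch, with my own function names:

```python
import numpy as np

def alpha_gershgorin(H_low, H_high):
    """A standard choice of alpha in the alpha-BB method: from an interval
    enclosure [H_low, H_high] of the Hessian on the box, a Gershgorin-type
    bound on its smallest eigenvalue gives
    alpha = max(0, -0.5 * min_i (H_low[i,i] - sum_{j != i} max |H_ij|))."""
    n = H_low.shape[0]
    rad = np.maximum(np.abs(H_low), np.abs(H_high))
    bounds = [H_low[i, i] - (rad[i].sum() - rad[i, i]) for i in range(n)]
    return max(0.0, -0.5 * min(bounds))

def underestimator(f, alpha, lo, hi):
    """Convex underestimator of f on the box [lo, hi]:
    x -> f(x) + alpha * sum_i (x_i - lo_i)(x_i - hi_i);
    the added term is <= 0 on the box and vanishes at its corners."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return lambda x: f(x) + alpha * np.sum((np.asarray(x) - lo) * (np.asarray(x) - hi))
```

A tighter Hessian enclosure, such as the linear parametric form of the talk, feeds into `alpha_gershgorin` through smaller radii and hence a smaller alpha, i.e., less overestimation.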
Incremental condition estimation for lower triangular matrices computes a sequence of approximate condition numbers of the leading upper left submatrices of growing dimension. The approximation for the current submatrix is obtained from an approximate singular vector constructed without accessing the previous submatrices, which makes the procedure relatively inexpensive and particularly suited when a triangular matrix is computed one row at a time.
In this talk we show that when the inverse of the triangular matrix is computed along with the triangular matrix itself, the accuracy of the estimators can be significantly improved by focusing on right singular vectors. This is joint work with Miroslav Tuma.
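A minimal sketch of the basic incremental estimator (in the spirit of Bischof's ICE; the improved variant using the explicit inverse and right singular vectors, which is the subject of the talk, is not reproduced here): the approximate singular vector for the leading k x k block is extended by one entry when row k+1 arrives, via a 2 x 2 eigenproblem.

```python
import numpy as np

def ice_sigma_min(L):
    """Incremental estimate of the smallest singular value of a lower
    triangular matrix L, processing one row at a time and never
    revisiting previous rows; the result is always >= sigma_min(L)."""
    n = L.shape[0]
    x = np.array([1.0])                  # unit approximate singular vector
    est = abs(L[0, 0])                   # ||L_1 x|| for the 1 x 1 block
    for k in range(1, n):
        v, g = L[k, :k], L[k, k]
        a, b = est, v @ x
        # choose s, c with s^2 + c^2 = 1 minimizing ||L_{k+1} [s x; c]||^2
        #   = s^2 a^2 + (s b + c g)^2, a 2 x 2 symmetric eigenproblem
        M = np.array([[a * a + b * b, b * g], [b * g, g * g]])
        w, V = np.linalg.eigh(M)
        s, c = V[:, 0]                   # eigenvector of the smallest eigenvalue
        x = np.concatenate([s * x, [c]]) # still a unit vector
        est = np.sqrt(max(w[0], 0.0))    # = ||L_{k+1} x||
    return est
```

Since `est` is the norm of L applied to a unit vector, it is an upper bound on the true smallest singular value; the quality of the bound is what the inverse-based refinement improves.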
Polynomial systems arise in many areas of the engineering sciences. Most of the time, the end-user needs information about the real solution set of the system under study, such as deciding its emptiness, computing sample solutions, deciding connectivity queries, or describing the projection of the solution set (quantifier elimination). Moreover, the algebraic nature of these problems makes the design of exact algorithms to solve them relevant. Complexity issues are also crucial, since these problems can be solved in time singly exponential in the number of variables.
In this talk, I will first review some techniques that allow one to obtain algorithms with a worst-case complexity that is essentially cubic in the output size, and show how they can be put into practice. Next, I will describe some recent trends in the topic, which consist in exploiting the structure of the input system to obtain faster algorithms.
The second part of the talk is joint work with Simone Naldi and Didier Henrion.
We shall show that higher-order Sobolev-type embeddings which are optimal within a fairly broad class of function spaces can be obtained via isoperimetric inequalities. Sobolev-type inequalities of any order, involving arbitrary rearrangement-invariant norms on open sets in R^n possibly endowed with a measure density, will be reduced to considerably simpler one-dimensional inequalities for suitable integral operators depending on the isoperimetric function of the relevant sets. As an immediate consequence, the optimal target space can be determined for Sobolev embeddings of any order on regular domains of Euclidean space, on irregular Euclidean domains described in terms of their isoperimetric function, and on families of product probability spaces, of which the Gauss space is a classical instance. An appropriate modification of the conditions leads to compactness of the corresponding Sobolev embeddings.
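Schematically, in the simplest first-order case the reduction principle takes the following form (stated here only as an illustration of the mechanism, with details and higher-order iterates as in the talk): the embedding $W^1 X(\Omega) \hookrightarrow Y(\Omega)$, for a set $\Omega$ with isoperimetric function $I$, reduces to the one-dimensional inequality

```latex
\[
  \Bigl\| \int_s^{|\Omega|} \frac{g(r)}{I(r)} \, dr \Bigr\|_{Y(0,|\Omega|)}
  \;\le\; C \, \| g \|_{X(0,|\Omega|)}
  \qquad \text{for all } g \ge 0,
\]
```

for the rearrangement-invariant norms $X$ and $Y$; higher-order embeddings reduce to iterates of this integral operator, and optimality of $Y$ corresponds to optimality in the one-dimensional inequality.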
Polynomial optimization is concerned with minimizing or maximizing a polynomial objective function subject to a finite number of polynomial constraints. The constraints are non-strict real polynomial inequalities in several variables. A very successful technique to solve these problems has emerged since the turn of the millennium. The basic idea is to relax the problem by replacing all nonlinear monomials by new variables. Of course, the resulting linear program would in general carry little information about the original problem. To delimit the loss of information during the linearization procedure, one adds infinite families of polynomial inequalities which are redundant in the original polynomial optimization problem but carry important information after being linearized. Each such family is chosen in a way such that it becomes a single linear matrix inequality when linearized. The linearized problem is no longer a linear program but still a semidefinite program and can therefore be solved efficiently. But what exactly do the optimal solutions of the linearized problem have to do with the original problem? This question is connected with the moment problem, with quadrature rules, and with sums of squares representations of positive polynomials.
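As a toy illustration of the linearization (my own example, not from the talk): to minimize $x^4 - x^2$ over $\mathbb{R}$, replace each power $x^k$ by a new variable $y_k$ and add the redundant family $q(x)^2 \ge 0$ over polynomials $q$ of degree at most $2$, which linearizes to a single linear matrix inequality, the moment matrix constraint:

```latex
\[
  \min_{x \in \mathbb{R}} \; x^4 - x^2
  \quad \rightsquigarrow \quad
  \min_{y} \; y_4 - y_2
  \quad \text{s.t.} \quad
  M(y) =
  \begin{pmatrix}
    1   & y_1 & y_2 \\
    y_1 & y_2 & y_3 \\
    y_2 & y_3 & y_4
  \end{pmatrix}
  \succeq 0 .
\]
```

The relaxed problem is a semidefinite program; in this particular example the relaxation is exact, as witnessed by the sums of squares identity $x^4 - x^2 + \tfrac14 = (x^2 - \tfrac12)^2$.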
Ab-initio electronic structure calculations that allow understanding and predicting material properties are a very complex subject involving a number of intricate sub-problems, even once a particular approach has been chosen from the diversity of available techniques and tools.
In the contribution we describe the building blocks of our computer implementation of an ab-initio real-space code for the following choice of techniques: (i) density functional theory, (ii) finite element method (or isogeometric analysis) and (iii) environment-reflecting pseudopotentials. The building blocks include: optimization algorithms, non-linear solvers, generalized eigenvalue problem solvers, finite element and isogeometric analysis libraries, data on atom properties, etc. The code is based on the open source finite element package SfePy (http://sfepy.org) and is primarily written in Python, a very high level interpreted programming language. We also discuss the relevance of Python both for gluing the individual building blocks together, and for fast implementation of some calculations.
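To illustrate one of these building blocks in the Python ecosystem the abstract mentions (this dense-matrix sketch is mine, not taken from the actual code, which delegates to SfePy and dedicated solvers): the finite element or isogeometric discretization of the Kohn-Sham equations leads to a symmetric generalized eigenvalue problem K u = lambda M u with a positive definite mass matrix M, which can be reduced to a standard one by a Cholesky factorization.

```python
import numpy as np

def generalized_eigh(K, M, k=None):
    """Solve the symmetric generalized eigenproblem K u = lambda M u
    (K symmetric, M symmetric positive definite) by Cholesky reduction
    to a standard symmetric eigenproblem."""
    L = np.linalg.cholesky(M)        # M = L L^T
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T            # standard symmetric problem, same spectrum
    w, V = np.linalg.eigh(A)         # eigenvalues in ascending order
    U = Linv.T @ V                   # back-transform the eigenvectors
    if k is not None:
        w, U = w[:k], U[:, :k]
    return w, U
```

In a production code the matrices are large and sparse, so iterative eigensolvers replace the dense factorization, but the reduction idea is the same.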
Joint work with M. Novak, R. Kolman, M. Tuma, J. Vackar.
Many real-world problems are described by nonlinear partial differential equations. A prominent example is a nonlinear (quasilinear) elliptic system with right-hand side in divergence form, div f, for given data f. In case the data are good enough (i.e., belong to L^2), one can solve such a problem using the theory of monotone operators; however, if the data are worse, no existence theory was available except when the operator is linear, e.g. the Laplace operator. For this particular case one can establish the existence of a solution whose gradient belongs to L^q whenever f belongs to L^q as well. From this point of view it would be desirable to have such a theory also for general operators. However, this cannot hold in full generality, as indicated by many counterexamples. Nevertheless, we show that such a theory can be built for operators having asymptotically the Uhlenbeck structure, which is a natural class of operators in the theory of PDEs.
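Schematically, the model problem and the estimate in question read as follows (stated informally; the precise structural assumptions are those of the talk):

```latex
% Find u solving the quasilinear system
\[
  -\operatorname{div} A(x, \nabla u) \;=\; -\operatorname{div} f
  \qquad \text{in } \Omega,
\]
% together with the gradient estimate
\[
  \| \nabla u \|_{L^q(\Omega)} \;\le\; C \, \| f \|_{L^q(\Omega)} .
\]
```

For $f \in L^2$ the monotone operator theory gives the case $q = 2$; for the Laplace operator the estimate holds for every $q \in (1,\infty)$ by the Calderon-Zygmund theory, and it is this full range that the asymptotically Uhlenbeck structure recovers for nonlinear operators.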
Bayesian networks and their decision-theoretic extensions, influence diagrams, belong to the family of probabilistic graphical models. They have been applied to diverse problems requiring decision-making under uncertainty. In this talk we will introduce an influence diagram for the optimization of a vehicle speed profile. We will present results of computational experiments in which an influence diagram was used to optimize the speed profile of a Formula 1 race car at the Silverstone F1 circuit.
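On a chain of decisions, solving an influence diagram amounts to maximizing expected utility by backward induction. A deterministic toy sketch of that computation for a speed profile (all segment lengths, speeds, and the speed-change penalty below are invented for illustration and have nothing to do with the actual Silverstone model):

```python
# Toy speed-profile optimization by backward induction: the track is a
# chain of segments, in each segment we choose a speed, and utility is
# negative travel time plus an (invented) penalty on speed changes.

SPEEDS = [40.0, 60.0, 80.0]        # admissible speeds (m/s), illustrative
SEG_LEN = [500.0, 300.0, 400.0]    # segment lengths (m), illustrative
CHANGE_COST = 0.05                 # seconds per (m/s) of speed change, invented

def best_profile(v_start=SPEEDS[0]):
    n = len(SEG_LEN)
    # value[v] = minimal total time for the remaining segments, entering at speed v
    value = {v: 0.0 for v in SPEEDS}
    policy = []
    for i in reversed(range(n)):
        new_value, choice = {}, {}
        for v_prev in SPEEDS:
            best = None
            for v in SPEEDS:
                t = SEG_LEN[i] / v + CHANGE_COST * abs(v - v_prev) + value[v]
                if best is None or t < best[0]:
                    best = (t, v)
            new_value[v_prev], choice[v_prev] = best
        value = new_value
        policy.append(choice)
    policy.reverse()
    # roll the optimal policy forward from the initial speed
    v, profile = v_start, []
    for i in range(n):
        v = policy[i][v]
        profile.append(v)
    return profile, value[v_start]
```

The actual influence diagram additionally carries chance nodes (uncertainty about track conditions, etc.), so the backward pass computes expected rather than deterministic utilities, but the structure of the computation is the same.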
Joint work with Vasek Kratochvil.
We present the use of the Lasso in data analysis in materials science and bioinformatics. The materials science application deals with the classification of crystal types, distinguishing zinc blende from rock salt structures. In the bioinformatics application, we are interested in the identification of markers of tumor diseases. Furthermore, we present the basic concepts of compressed sensing, which allow for a deeper understanding of the numerical results.
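The Lasso estimator underlying both applications can be sketched in a few lines via proximal gradient descent (ISTA); this generic implementation is my own illustration, not the code used in the talk:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize (1/(2n)) * ||y - X w||^2 + lam * ||w||_1 by ISTA:
    gradient steps on the quadratic part followed by soft-thresholding,
    which drives irrelevant coefficients exactly to zero -- the property
    exploited for feature/marker selection."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w
```

The compressed sensing theory mentioned in the abstract explains when such an l1-penalized estimator provably recovers the true sparse coefficient vector.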
Last updated on 27 February 2015.