Olivier attended the VISAPP conference, where he presented results on "Reranking with contextual dissimilarity measures from representational Bregman k-means". Here is the abstract, followed by a short report on the conference. We present a novel reranking framework for Content-Based
Jensen-Bregman Voronoi diagrams and centroidal tessellations
The Jensen-Bregman divergence is a distortion measure defined by the Jensen difference of a strictly convex function. Jensen-Bregman divergences extend the well-known Jensen-Shannon divergence by allowing one to choose an arbitrary convex generator instead of the standard Shannon entropy.
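A minimal sketch of this construction (my own illustration, not code from the post): the Jensen-Bregman divergence for a generic convex generator, checked against the special case where the generator is the negative Shannon entropy, which recovers the Jensen-Shannon divergence.

```python
import numpy as np

def jensen_bregman(p, q, F):
    """Jensen difference of a strictly convex generator F:
    JB_F(p, q) = (F(p) + F(q)) / 2 - F((p + q) / 2) >= 0 by convexity."""
    return 0.5 * (F(p) + F(q)) - F(0.5 * (p + q))

def neg_shannon(p):
    # Negative Shannon entropy, a strictly convex generator on the simplex.
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * np.log(p)))

# With F = negative Shannon entropy, JB_F is the Jensen-Shannon divergence.
p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
js = jensen_bregman(p, q, neg_shannon)
```

The divergence is symmetric in its two arguments and, in nats, bounded above by log 2 for the Jensen-Shannon case.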
The Burbea-Rao and Bhattacharyya centroids
We study the centroid with respect to the class of information-theoretic distortion measures called Burbea-Rao divergences. Burbea-Rao divergences generalize the Jensen-Shannon divergence by measuring the non-negative Jensen difference induced by a strictly convex and differentiable function expressing a measure of
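One connection underlying the post's title can be checked numerically: for distributions of a common exponential family, the Bhattacharyya distance coincides with the Burbea-Rao (Jensen) difference of the log-normalizer evaluated at the natural parameters. A sketch with unit-variance Gaussians, where the natural parameter is θ = μ and the log-normalizer is F(θ) = θ²/2 (constants and choices here are mine, for illustration):

```python
import numpy as np
from scipy.integrate import quad

def burbea_rao(t1, t2, F):
    # Non-negative Jensen difference induced by a strictly convex F.
    return 0.5 * (F(t1) + F(t2)) - F(0.5 * (t1 + t2))

# Unit-variance Gaussians: natural parameter theta = mu, F(theta) = theta^2 / 2.
F = lambda t: t * t / 2.0
mu1, mu2 = 0.0, 2.0

def pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Bhattacharyya distance: minus the log of the Bhattacharyya coefficient.
coeff, _ = quad(lambda x: np.sqrt(pdf(x, mu1) * pdf(x, mu2)), -20, 20)
bhat = -np.log(coeff)

# Burbea-Rao (Jensen) difference of the log-normalizer; here (mu1-mu2)^2 / 8.
jensen = burbea_rao(mu1, mu2, F)
```

Both quantities evaluate to (μ₁ − μ₂)²/8 = 0.5 in this example.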
Computable reals
Well, the title of the blog is computational information geometry. It includes the word “computational”. Of course, one aspect is to bring computational geometry vistas to information geometry by fostering algorithmic techniques. Another aspect is to ponder what can be
Unusual exponential families
I recently read articles on paleomagnetism. It is common to assume antipodal symmetry for the distribution of the dispersion of directions. Two spherical distributions (directional statistics) with such an antipodal symmetry are the Bingham and
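A quick numerical check of why the Bingham density is antipodally symmetric (a sketch of mine, not from the post): its unnormalized form is exp(xᵀAx) for a symmetric matrix A, and negating x leaves the quadratic form unchanged.

```python
import numpy as np

def bingham_unnormalized(x, A):
    # Unnormalized Bingham density on the unit sphere: exp(x^T A x).
    return np.exp(x @ A @ x)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
A = (A + A.T) / 2               # symmetrize the parameter matrix

x = rng.normal(size=3)
x /= np.linalg.norm(x)          # a point on the unit sphere
# bingham_unnormalized(x, A) equals bingham_unnormalized(-x, A):
# (-x)^T A (-x) = x^T A x, so the density cannot tell x from -x.
```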
French computational geometry days (JGA) 2010
Olivier was kind enough to provide this trip report on the French computational geometry days, also known as JGA 2010. The plenary speakers' slides are online and worth a look. The Journées de Géométrie Algorithmique are the main French-speaking meeting about computational
Log-euclidean matrix vector space
Tensors are square symmetric positive-definite matrices. They are, surprisingly, in one-to-one correspondence with symmetric matrices through matrix exponentiation (exp/log computed on the diagonal elements of the spectral decomposition). I said surprisingly because tensors are symmetric and therefore a proper subset
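The exp/log pair described above can be sketched directly from the spectral decomposition (my own illustration): take the eigendecomposition of the symmetric matrix and apply log or exp to the eigenvalues.

```python
import numpy as np

def spd_log(S):
    # Matrix logarithm of an SPD matrix via its spectral decomposition:
    # apply log to the (positive) eigenvalues, keep the eigenvectors.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sym_exp(L):
    # Matrix exponential of a symmetric matrix: exp on the eigenvalues.
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
S = M @ M.T + 3 * np.eye(3)   # an SPD "tensor"

L = spd_log(S)        # a symmetric matrix, not necessarily positive-definite
S_back = sym_exp(L)   # the round trip recovers S
```

The log of an SPD matrix is symmetric but can have negative eigenvalues, which is exactly why the map lands in the larger vector space of symmetric matrices.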
Additive versus non-additive property of entropy
Shannon entropy is said to be additive in the sense that, for independent random variables X and Y, the entropy of the joint distribution H(X,Y) is the sum of the entropies: H(X,Y) = H(X) + H(Y). This property does not hold for the quadratic entropy (based on the sum of squared probabilities). The Java program
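The post's Java program is not reproduced here; a hedged Python sketch of the same two behaviors, comparing both entropies on the joint distribution of two independent variables:

```python
import numpy as np

def shannon(p):
    # Shannon entropy in nats.
    return -np.sum(p * np.log(p))

def quadratic(p):
    # Quadratic (Tsallis q=2) entropy: 1 minus the sum of squared probabilities.
    return 1.0 - np.sum(p ** 2)

p = np.array([0.2, 0.8])
q = np.array([0.5, 0.3, 0.2])
joint = np.outer(p, q).ravel()  # joint distribution of independent X, Y

# Shannon entropy is additive: shannon(joint) == shannon(p) + shannon(q).
# The quadratic entropy is not: quadratic(joint) != quadratic(p) + quadratic(q).
```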
Taxonomy of principal distances
How do we visualize relationships in the jungle of (statistical) distances? I tried to give insights at a glance with this poster.
Convex Hull Peeling
A long time ago, well, in 1996, I investigated output-sensitive algorithms. I then designed an algorithm for iteratively peeling the convex hulls of a 2D point set. This yields the notion of depth of a point set, and is a
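A naive version of convex-hull peeling (repeatedly computing and removing the hull, not the output-sensitive algorithm from the post) can be sketched with SciPy; the index of the layer a point falls in is its peeling depth:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_peeling_layers(points):
    """Iteratively peel convex hulls off a 2D point set.
    Returns a list of layers (arrays of points); a point's layer
    index is its convex-hull peeling depth."""
    pts = np.asarray(points, dtype=float)
    layers = []
    while len(pts) >= 3:
        hull = ConvexHull(pts)
        layers.append(pts[hull.vertices])       # outermost remaining layer
        mask = np.ones(len(pts), dtype=bool)
        mask[hull.vertices] = False
        pts = pts[mask]                         # peel it off
    if len(pts):                                # fewer than 3 points left
        layers.append(pts)
    return layers

rng = np.random.default_rng(2)
layers = hull_peeling_layers(rng.normal(size=(40, 2)))
```

Every input point lands in exactly one layer, so the layer sizes sum back to the input size.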
The geometric median
The center of mass (= centroid) is defined as the center point minimizing the sum of squared Euclidean distances (= the variance). If one of the source points is an outlier corrupting your dataset, and if that outlier goes to infinity, then your
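The contrast with the geometric median can be illustrated with Weiszfeld's classical fixed-point iteration (a standard algorithm, assumed here for illustration; the post may discuss other methods): the centroid is dragged arbitrarily far by a single outlier, while the geometric median stays near the cluster.

```python
import numpy as np

def geometric_median(points, iters=200):
    """Weiszfeld's fixed-point iteration for the geometric median,
    the point minimizing the sum of (unsquared) Euclidean distances."""
    pts = np.asarray(points, dtype=float)
    y = pts.mean(axis=0)                        # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - y, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)       # guard: iterate on a data point
        w = 1.0 / d
        y = (w[:, None] * pts).sum(axis=0) / w.sum()
    return y

data = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
with_outlier = np.vstack([data, [[1000.0, 1000.0]]])

centroid = with_outlier.mean(axis=0)       # dragged far away by the outlier
median = geometric_median(with_outlier)    # stays near the unit square
```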
Natural Exponential Families QVF
There are only six exponential family distributions whose variance is a quadratic function (QVF = quadratic variance function) of the mean. For the multivariate case, it is a bit more complex but well defined and studied: the $2d+4$ simple quadratic natural
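Two of the six families can be checked empirically (a sampling sketch of mine, not from the post): the Poisson variance is the degenerate quadratic V(μ) = μ, and the binomial variance is V(μ) = μ(1 − μ/n) with μ = np.

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson: variance is a (degenerate) quadratic in the mean, V(mu) = mu.
mu = 4.0
x = rng.poisson(mu, size=200_000)

# Binomial(n, p): V(mu) = mu * (1 - mu / n), quadratic in mu = n * p.
n, prob = 10, 0.3
y = rng.binomial(n, prob, size=200_000)

# The sample variances should match the quadratic variance functions:
poisson_gap = abs(x.var() - mu)
binomial_gap = abs(y.var() - n * prob * (1 - prob))
```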