Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Covariances in computer vision and machine learning /

By: Hà, Quang Minh, 1977- [author].
Contributor(s): Murino, Vittorio [author].
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on computer vision, #13
Publisher: [San Rafael, California] : Morgan & Claypool, 2018
Description: 1 PDF (xiii, 156 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781681730141
Subject(s): Computer vision -- Mathematical models | Machine learning -- Mathematical models | covariance descriptors in computer vision | positive definite matrices | infinite-dimensional covariance operators | positive definite operators | Hilbert-Schmidt operators | Riemannian manifolds | affine-invariant Riemannian distance | Log-Euclidean distance | Log-Hilbert-Schmidt distance | convex cone | Bregman divergences | kernel methods on Riemannian manifolds | visual object recognition | image classification
Genre/Form: Electronic books
DDC classification: 006.37
Online resources: Abstract with links to resource
Also available in print.
Contents:
Part I. Covariance matrices and applications -- 1. Data representation by covariance matrices -- 1.1 Covariance matrices for data representation -- 1.2 Statistical interpretation -- 2. Geometry of SPD matrices -- 2.1 Euclidean distance -- 2.2 Interpretations and motivations for the different invariances -- 2.3 Basic Riemannian geometry -- 2.4 Affine-invariant Riemannian metric on SPD matrices -- 2.4.1 Connection with the Fisher-Rao metric -- 2.5 Log-Euclidean metric -- 2.5.1 Log-Euclidean distance as an approximation of the affine-invariant Riemannian distance -- 2.5.2 Log-Euclidean distance as a Riemannian distance -- 2.5.3 Log-Euclidean vs. Euclidean -- 2.6 Bregman divergences -- 2.6.1 Log-determinant divergences -- 2.6.2 Connection with the Rényi and Kullback-Leibler divergences -- 2.7 Alpha-Beta Log-Det divergences -- 2.8 Power Euclidean metrics -- 2.9 Distances and divergences between empirical covariance matrices -- 2.10 Running time comparison -- 2.11 Summary -- 3. Kernel methods on covariance matrices -- 3.1 Positive definite kernels and reproducing kernel Hilbert spaces -- 3.2 Positive definite kernels on SPD matrices -- 3.2.1 Positive definite kernels with the Euclidean metric -- 3.2.2 Positive definite kernels with the log-Euclidean metric -- 3.2.3 Positive definite kernels with the symmetric Stein divergence -- 3.2.4 Positive definite kernels with the affine-invariant Riemannian metric -- 3.3 Kernel methods on covariance matrices -- 3.4 Experiments on image classification -- 3.4.1 Datasets -- 3.4.2 Results -- 3.5 Related approaches --
Part II. Covariance operators and applications -- 4. Data representation by covariance operators -- 4.1 Positive definite kernels and feature maps -- 4.2 Covariance operators in RKHS -- 4.3 Data representation by RKHS covariance operators -- 5. Geometry of covariance operators -- 5.1 Hilbert-Schmidt distance -- 5.2 Riemannian distances between covariance operators -- 5.2.1 The affine-invariant Riemannian metric -- 5.2.2 Log-Hilbert-Schmidt metric -- 5.3 Infinite-dimensional alpha log-determinant divergences -- 5.4 Summary -- 6. Kernel methods on covariance operators -- 6.1 Positive definite kernels on covariance operators -- 6.1.1 Kernels defined using the Hilbert-Schmidt metric -- 6.1.2 Kernels defined using the log-Hilbert-Schmidt metric -- 6.2 Two-layer kernel machines -- 6.3 Approximate methods -- 6.3.1 Approximate log-Hilbert-Schmidt distance and approximate affine-invariant Riemannian distance -- 6.3.2 Computational complexity -- 6.3.3 Approximate log-Hilbert-Schmidt inner product -- 6.3.4 Two-layer kernel machine with the approximate log-Hilbert-Schmidt distance -- 6.3.5 Case study: approximation by Fourier feature maps -- 6.4 Experiments in image classification -- 6.5 Summary -- 7. Conclusion and future outlook --
A. Supplementary technical information -- Mean squared errors for empirical covariance matrices -- Matrix exponential and principal logarithm Fréchet derivative -- The quasi-random Fourier features -- Low-discrepancy sequences -- The Gaussian case -- Proofs of several mathematical results -- Bibliography -- Authors' biographies.
Abstract: Covariance matrices play important roles in many areas of mathematics, statistics, and machine learning, as well as in their applications. In computer vision and image processing, they give rise to a powerful data representation, namely the covariance descriptor, with numerous practical applications. In this book, we begin by presenting an overview of the finite-dimensional covariance matrix representation of images, along with its statistical interpretation. In particular, we discuss the various distances and divergences that arise from the intrinsic geometrical structures of the set of Symmetric Positive Definite (SPD) matrices, namely the Riemannian manifold and convex cone structures. Computationally, we focus on kernel methods on covariance matrices, especially using the Log-Euclidean distance. We then show some of the latest developments in the generalization of the finite-dimensional covariance matrix representation to the infinite-dimensional covariance operator representation via positive definite kernels. We present the generalizations of the affine-invariant Riemannian metric and of the Log-Euclidean distance, the latter in the form of the Log-Hilbert-Schmidt metric. Computationally, we focus on kernel methods on covariance operators, especially using the Log-Hilbert-Schmidt distance. Specifically, we present a two-layer kernel machine, using the Log-Hilbert-Schmidt distance and its finite-dimensional approximation, which reduces the computational complexity of the exact formulation while largely preserving its capability. Theoretical analysis shows that, mathematically, the approximate Log-Hilbert-Schmidt distance should be preferred over the approximate Log-Hilbert-Schmidt inner product and, computationally, it should be preferred over the approximate affine-invariant Riemannian distance. Numerical experiments on image classification demonstrate significant improvements of the infinite-dimensional formulation over its finite-dimensional counterpart. Given the numerous applications of covariance matrices in mathematics, statistics, and machine learning, among other areas, we expect that the infinite-dimensional covariance operator formulation presented here will find many more applications beyond those in computer vision.
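
To make the geometric notions in the abstract concrete, the following short Python sketch (not taken from the book; a minimal illustration assuming NumPy and SciPy are available, with all function names and parameter values chosen here for exposition) forms covariance descriptors from per-pixel feature vectors and computes the two distances on SPD matrices that the book's kernel methods build on, the Log-Euclidean distance and the affine-invariant Riemannian distance, together with a Gaussian kernel based on the Log-Euclidean distance.

    # Minimal sketch (not from the book): covariance descriptors and two standard
    # distances on SPD matrices, written against NumPy/SciPy.
    import numpy as np
    from scipy.linalg import logm, fractional_matrix_power

    def covariance_descriptor(features, eps=1e-6):
        # features: n x d array, one feature vector per pixel/region.
        # A small multiple of the identity keeps the result strictly positive definite.
        cov = np.cov(features, rowvar=False)
        return cov + eps * np.eye(cov.shape[0])

    def log_euclidean_distance(A, B):
        # d(A, B) = ||log(A) - log(B)||_F for SPD matrices A, B.
        return np.linalg.norm(logm(A) - logm(B), ord="fro")

    def affine_invariant_distance(A, B):
        # d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F for SPD matrices A, B.
        A_inv_sqrt = fractional_matrix_power(A, -0.5)
        return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), ord="fro")

    def log_euclidean_gaussian_kernel(A, B, sigma=1.0):
        # K(A, B) = exp(-d_logE(A, B)^2 / (2 sigma^2)); positive definite because
        # the matrix logarithm maps SPD matrices isometrically into a vector space.
        d = log_euclidean_distance(A, B)
        return np.exp(-d**2 / (2.0 * sigma**2))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 5))  # hypothetical per-pixel features, image 1
        Y = rng.standard_normal((500, 5))  # hypothetical per-pixel features, image 2
        A, B = covariance_descriptor(X), covariance_descriptor(Y)
        print("Log-Euclidean distance:    ", log_euclidean_distance(A, B))
        print("Affine-invariant distance: ", affine_invariant_distance(A, B))
        print("Log-Euclidean Gauss kernel:", log_euclidean_gaussian_kernel(A, B))

In practice such a kernel would be plugged into a standard kernel classifier such as an SVM; the book's later chapters replace the finite-dimensional covariance matrices with RKHS covariance operators and the Log-Euclidean distance with the Log-Hilbert-Schmidt distance and its finite-dimensional approximations.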
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE798
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 143-154).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Also available in print.

Title from PDF title page (viewed on November 22, 2017).
