

Metric learning /

By: Bellet, Aurélien [author.].
Contributor(s): Habrard, Amaury [author.] | Sebban, Marc [author.].
Material type: Book
Series: Synthesis digital library of engineering and computer science ; Synthesis lectures on artificial intelligence and machine learning, # 30
Publisher: San Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, 2015
Description: 1 PDF (xi, 139 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627053662
Subject(s): Machine learning | metric learning | similarity learning | Mahalanobis distance | edit distance | structured data | learning theory
DDC classification: 006.31
Online resources: Abstract with links to resource
Also available in print.
Contents:
1. Introduction -- 1.1 Metric learning in a nutshell -- 1.2 Related topics -- 1.3 Prerequisites and notations -- 1.4 Outline --
2. Metrics -- 2.1 General definitions -- 2.2 Commonly used metrics -- 2.2.1 Metrics for numerical data -- 2.2.2 Metrics for structured data -- 2.3 Metrics in machine learning and data mining --
3. Properties of metric learning algorithms --
4. Linear metric learning -- 4.1 Mahalanobis distance learning -- 4.1.1 Early approaches -- 4.1.2 Regularized approaches -- 4.2 Linear similarity learning -- 4.3 Large-scale metric learning -- 4.3.1 Large n: online, stochastic and distributed optimization -- 4.3.2 Large d: metric learning in high dimensions -- 4.3.3 Large n and large d --
5. Nonlinear and local metric learning -- 5.1 Nonlinear methods -- 5.1.1 Kernelization of linear methods -- 5.1.2 Learning nonlinear forms of metrics -- 5.2 Learning multiple local metrics --
6. Metric learning for special settings -- 6.1 Multi-task and transfer learning -- 6.2 Learning to rank -- 6.3 Semi-supervised learning -- 6.3.1 Classic setting -- 6.3.2 Domain adaptation -- 6.4 Histogram data --
7. Metric learning for structured data -- 7.1 String edit distance learning -- 7.1.1 Probabilistic methods -- 7.1.2 Gradient descent methods -- 7.2 Tree and graph edit distance learning -- 7.3 Metric learning for time series --
8. Generalization guarantees for metric learning -- 8.1 Overview of existing work -- 8.2 Consistency bounds for metric learning -- 8.2.1 Definitions -- 8.2.2 Bounds based on uniform stability -- 8.2.3 Bounds based on algorithmic robustness -- 8.3 Guarantees on classification performance -- 8.3.1 Good similarity learning for linear classification -- 8.3.2 Bounds based on Rademacher complexity --
9. Applications -- 9.1 Computer vision -- 9.2 Bioinformatics -- 9.3 Information retrieval --
10. Conclusion -- 10.1 Summary -- 10.2 Outlook --
A. Proofs of chapter 8 -- Uniform stability -- Algorithmic robustness -- Similarity-based linear classifiers -- Bibliography -- Authors' biographies.
Abstract: Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learning literature that covers algorithms, theory and applications for both numerical and structured data. We first introduce relevant definitions and classic metric functions, as well as examples of their use in machine learning and data mining. We then review a wide range of metric learning algorithms, starting with the simple setting of linear distance and similarity learning. We show how one may scale up these methods to very large amounts of training data. To go beyond the linear case, we discuss methods that learn nonlinear metrics or multiple linear metrics throughout the feature space, and review methods for more complex settings such as multi-task and semi-supervised learning. Although most of the existing work has focused on numerical data, we cover the literature on metric learning for structured data like strings, trees, graphs and time series. In the more technical part of the book, we present some recent statistical frameworks for analyzing the generalization performance in metric learning and derive results for some of the algorithms presented earlier. Finally, we illustrate the relevance of metric learning in real-world problems through a series of successful applications to computer vision, bioinformatics and information retrieval.
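To illustrate the Mahalanobis distance mentioned in the subject headings and studied in chapter 4, here is a minimal sketch, not taken from the book: NumPy is assumed, and the matrix L, the sample vectors and the function name are hypothetical placeholders. The learned metric has the form d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L positive semi-definite.

import numpy as np

def mahalanobis_distance(x, y, L):
    """Distance induced by M = L^T L: sqrt((x - y)^T M (x - y))."""
    diff = L @ (x - y)        # map the difference through the (learned) linear transform L
    return float(np.sqrt(diff @ diff))

# Hypothetical example: L rescales the two features differently.
L = np.array([[2.0, 0.0],
              [0.0, 0.5]])
x = np.array([1.0, 3.0])
y = np.array([0.0, 1.0])
print(mahalanobis_distance(x, y, L))  # approximately 2.236

When L is the identity matrix this reduces to the ordinary Euclidean distance; metric learning algorithms instead fit L (or M) from supervised similarity and dissimilarity constraints, as the abstract above describes.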
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE619
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 115-138).


Abstract freely available; full-text restricted to subscribers or individual document purchasers.



Also available in print.

Title from PDF title page (viewed on February 22, 2015).

