Extreme value theory-based methods for visual recognition /

By: Scheirer, Walter J., author.
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on computer vision, # 10
Publisher: [San Rafael, California] : Morgan & Claypool, 2017
Description: 1 PDF (xv, 115 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627057035
Subject(s): Computer vision | Extreme value theory | Visual perception | visual recognition | extreme value theory | machine learning | statistical methods | decision making | failure prediction | information fusion | score normalization | open set recognition | object recognition | information retrieval | biometrics | deep learning
DDC classification: 006.37
Online resources: Abstract with links to resource
Also available in print.
Contents:
1. Extrema and visual recognition -- 1.1 An alternative to central tendency modeling -- 1.2 Background -- 1.3 Extreme value theory for recognition -- 1.4 Decision making in machine learning -- 1.5 Organization --
2. A brief introduction to statistical extreme value theory -- 2.1 Basic concepts -- 2.2 The extreme value theorem -- 2.3 Distributions in the EVT family -- 2.3.1 Gumbel distribution -- 2.3.2 Fréchet distribution -- 2.3.3 Weibull distribution -- 2.3.4 Generalized extreme value distribution -- 2.3.5 Rayleigh distribution -- 2.3.6 Generalized Pareto distribution -- 2.4 Tail size estimation -- 2.5 The i.i.d. assumption and visual data --
3. Post-recognition score analysis -- 3.1 Failure prediction for recognition systems -- 3.2 Meta-recognition -- 3.2.1 A formal model of recognition -- 3.2.2 Meta-recognition as hypothesis testing -- 3.2.3 Weibull-based meta-recognition -- 3.2.4 Validation tools for meta-recognition -- 3.3 Uses of meta-recognition for visual recognition --
4. Recognition score normalization -- 4.1 Goals of good recognition score normalization -- 4.2 W-score normalization -- 4.3 Empirical evaluation of W-score fusion -- 4.4 Other instantiations of EVT normalization -- 4.4.1 GEV-based normalization: extreme value sample consensus -- 4.4.2 GEV-based normalization: GEV-Kmeans -- 4.4.3 Pareto-based normalization: image retrieval as outlier detection -- 4.4.4 Pareto-based normalization: visual inspection via anomaly detection --
5. Calibration of supervised machine learning algorithms -- 5.1 Goals of calibration for decision making -- 5.2 Probability of exclusion: multi-attribute spaces -- 5.3 Open set recognition -- 5.3.1 Probability of inclusion: PI-SVM -- 5.3.2 Probability of inclusion and exclusion: W-SVM -- 5.3.3 Sparse representation-based open set recognition -- 5.3.4 EVT calibration for deep networks: OpenMax --
6. Summary and future directions -- Bibliography -- Author's biography.
Abstract: A common feature of many approaches to modeling sensory statistics is an emphasis on capturing the "average." From early representations in the brain, to highly abstracted class categories in machine learning for classification tasks, central-tendency models based on the Gaussian distribution are a seemingly natural and obvious choice for modeling sensory data. However, insights from neuroscience, psychology, and computer vision suggest an alternative strategy: preferentially focusing representational resources on the extremes of the distribution of sensory inputs. The notion of treating extrema near a decision boundary as features is not necessarily new, but a comprehensive statistical theory of recognition based on extrema is only now emerging in the computer vision literature. This book begins by introducing statistical Extreme Value Theory (EVT) for visual recognition. In contrast to central-tendency modeling, it is hypothesized that distributions near decision boundaries form a more powerful model for recognition tasks by focusing coding resources on data that are arguably the most diagnostic features. EVT has several important properties: strong statistical grounding, better modeling accuracy near decision boundaries than Gaussian modeling, the ability to model asymmetric decision boundaries, and accurate prediction of the probability of an event beyond our experience. The second part of the book uses the theory to describe a new class of machine learning algorithms for decision making that are a measurable advance beyond the state of the art. These include methods for post-recognition score analysis, information fusion, multi-attribute spaces, and calibration of supervised machine learning algorithms.
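
The EVT-based score modeling described in the abstract can be illustrated with a brief sketch. The Python fragment below is an illustrative approximation only, not the book's exact algorithm: it fits a Weibull distribution (one of the EVT family members covered in Chapter 2) to the extreme tail of non-match recognition scores using SciPy, then uses the fitted CDF to map a raw score to a calibrated confidence, in the spirit of the W-score normalization and Weibull-based meta-recognition material outlined in the contents. The function name w_score and the tail_size parameter are hypothetical choices made for this example.

# Illustrative sketch only: Weibull (EVT) fit to the tail of non-match scores,
# loosely in the spirit of W-score normalization; not the book's exact method.
import numpy as np
from scipy.stats import weibull_min

def w_score(raw_score, non_match_scores, tail_size=25):
    """Map a raw similarity score to [0, 1] via an EVT fit to the largest
    `tail_size` non-match scores (the extreme tail of the distribution)."""
    tail = np.sort(np.asarray(non_match_scores))[-tail_size:]
    shape, loc, scale = weibull_min.fit(tail)        # fit the tail model
    return weibull_min.cdf(raw_score, shape, loc=loc, scale=scale)

# Example with synthetic non-match scores and one candidate score:
rng = np.random.default_rng(0)
print(w_score(0.9, rng.normal(0.5, 0.1, size=1000), tail_size=50))

A raw score that falls far beyond the modeled tail maps to a value near 1, which is what makes such normalized scores comparable across recognition algorithms before fusion.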
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE748
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 99-114).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Title from PDF title page (viewed on February 24, 2017).
