Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Introduction to semi-supervised learning

By: Zhu, Xiaojin.
Contributor(s): Goldberg, Andrew B.
Material type: Book
Series: Synthesis lectures on artificial intelligence and machine learning: # 6.
Publisher: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool Publishers, c2009.
Description: 1 electronic text (xi, 116 p. : ill.) : digital file.
ISBN: 9781598295481 (electronic bk.).
Uniform titles: Synthesis digital library of engineering and computer science.
Subject(s): Supervised learning (Machine learning) | Support vector machines | Semi-supervised learning | Transductive learning | Self-training | Gaussian mixture model | Expectation maximization (EM) | Cluster-then-label | Co-training | Multiview learning | Mincut | Harmonic function | Label propagation | Manifold regularization | Semi-supervised support vector machines (S3VM) | Transductive support vector machines (TSVM) | Entropy regularization | Human semi-supervised learning
DDC classification: 006.31
Online resources: Abstract with links to resource.
Contents:
Introduction to statistical machine learning -- The data -- Unsupervised learning -- Supervised learning -- Overview of semi-supervised learning -- Learning from both labeled and unlabeled data -- How is semi-supervised learning possible -- Inductive vs. transductive semi-supervised learning -- Caveats -- Self-training models -- Mixture models and EM -- Mixture models for supervised classification -- Mixture models for semi-supervised classification -- Optimization with the EM algorithm -- The assumptions of mixture models -- Other issues in generative models -- Cluster-then-label methods -- Co-training -- Two views of an instance -- Co-training -- The assumptions of co-training -- Multiview learning -- Graph-based semi-supervised learning -- Unlabeled data as stepping stones -- The graph -- Mincut -- Harmonic function -- Manifold regularization -- The assumption of graph-based methods -- Semi-supervised support vector machines -- Support vector machines -- Semi-supervised support vector machines -- Entropy regularization -- The assumption of S3VMs and entropy regularization -- Human semi-supervised learning -- From machine learning to cognitive science -- Study one: humans learn from unlabeled test data -- Study two: presence of human semi-supervised learning in a simple task -- Study three: absence of human semi-supervised learning in a complex task -- Discussions -- Theory and outlook -- A simple PAC bound for supervised learning -- A simple PAC bound for semi-supervised learning -- Future directions of semi-supervised learning -- Basic mathematical reference -- Semi-supervised learning software -- Symbols -- Biography.
Abstract: Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection), where all the data is unlabeled, or in the supervised paradigm (e.g., classification, regression), where all the data is labeled. The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and to design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning-theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field.
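The abstract names self-training among the models the book covers. As a purely illustrative sketch (not taken from the book), the loop below self-trains a toy one-dimensional threshold classifier: fit on the labeled data, pseudo-label the unlabeled point farthest from the current decision threshold, add it to the labeled pool, and repeat. The data, the classifier, and the confidence rule are all invented for this demo.

```python
# Illustrative self-training sketch: a 1-D threshold classifier repeatedly
# pseudo-labels its most confident unlabeled point and retrains.

def train_threshold(points):
    """Fit a midpoint threshold between the two class means."""
    xs0 = [x for x, y in points if y == 0]
    xs1 = [x for x, y in points if y == 1]
    m0 = sum(xs0) / len(xs0)
    m1 = sum(xs1) / len(xs1)
    return (m0 + m1) / 2.0, (m1 > m0)

def predict(model, x):
    thresh, one_is_high = model
    return int((x >= thresh) == one_is_high)

def self_train(labeled, unlabeled, rounds=5):
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        model = train_threshold(labeled)
        thresh, _ = model
        # Confidence here = distance from the decision threshold.
        pool.sort(key=lambda x: abs(x - thresh), reverse=True)
        x = pool.pop(0)                       # most confident point
        labeled.append((x, predict(model, x)))  # pseudo-label it
    return train_threshold(labeled)

labeled = [(0.0, 0), (4.0, 1)]
unlabeled = [0.5, 0.8, 3.2, 3.9]
model = self_train(labeled, unlabeled)
print(predict(model, 1.0), predict(model, 3.0))  # prints: 0 1
```

As the book's treatment of assumptions suggests, a loop like this only helps when the classifier's confident predictions are actually correct; a single confidently wrong pseudo-label gets baked into every later round.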
Item type: E-books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE190
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 95-112) and index.


Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed in: Compendex, INSPEC.


Also available in print.

Title from PDF t.p. (viewed on July 8, 2009).

