Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Semi-supervised learning and domain adaptation in natural language processing

By: Søgaard, Anders.
Material type: materialTypeLabelBookSeries: Synthesis digital library of engineering and computer science: ; Synthesis lectures on human language technologies: # 21.Publisher: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, c2013Description: 1 electronic text (x, 93 p.) : ill., digital file.ISBN: 9781608459865 (electronic bk.).Subject(s): Natural language processing (Computer science) | Supervised learning (Machine learning) | natural language processing | machine learning | learning under bias | semi-supervised learningDDC classification: 006.35 Online resources: Abstract with links to resource | Abstract with links to full text Also available in print.
Contents:
1. Introduction -- 1.1 Introduction -- 1.2 Learning under bias -- 1.3 Empirical evaluations --
2. Supervised and unsupervised prediction -- 2.1 Standard assumptions in supervised learning -- 2.1.1 How to check whether the assumptions hold -- 2.2 Nearest neighbor -- 2.3 Naive Bayes -- 2.4 Perceptron -- 2.4.1 Large-margin methods -- 2.5 Comparisons of classification algorithms -- 2.6 Learning from weighted data -- 2.6.1 Weighted k-nearest neighbor -- 2.6.2 Weighted naive Bayes -- 2.6.3 Weighted perceptron -- 2.6.4 Weighted large-margin learning -- 2.7 Clustering algorithms -- 2.7.1 Hierarchical clustering -- 2.7.2 k-means -- 2.7.3 Expectation maximization -- 2.7.4 Evaluating clustering algorithms -- 2.8 Part-of-speech tagging -- 2.9 Dependency parsing -- 2.9.1 Transition-based dependency parsing -- 2.9.2 Graph-based dependency parsing --
3. Semi-supervised learning -- 3.1 Wrapper methods -- 3.1.1 Self-training -- 3.1.2 Co-training -- 3.1.3 Tri-training -- 3.1.4 Soft self-training, EM and co-EM -- 3.2 Clusters-as-features -- 3.3 Semi-supervised nearest neighbor -- 3.3.1 Label propagation -- 3.3.2 Semi-supervised nearest neighbor editing -- 3.3.3 Semi-supervised condensed nearest neighbor --
4. Learning under bias -- 4.1 Semi-supervised learning as transfer learning -- 4.2 Transferring data -- 4.2.1 Outlier detection -- 4.2.2 Importance weighting -- 4.3 Transferring features -- 4.3.1 Changing feature representation to minimize divergence -- 4.3.2 Structural correspondence learning -- 4.4 Transferring parameters --
5. Learning under unknown bias -- 5.1 Adversarial learning -- 5.2 Ensemble-based methods and meta-learning --
6. Evaluating under bias -- 6.1 What is language? -- 6.2 Significance across corpora -- 6.3 Meta-analysis -- 6.4 Performance and data characteristics -- 6.5 Down-stream evaluation --
Bibliography -- Author's biography.
Abstract: This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant.
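The abstract's central idea, improving a supervised classifier by exploiting unlabeled data, is easiest to see in a wrapper method such as the self-training covered in Section 3.1.1. The sketch below is a minimal, hypothetical illustration in Python (the language of the book's own snippets), assuming scikit-learn and SciPy are available; the function name self_train, the 0.95 confidence threshold, and the round count are illustrative choices, not code taken from the book.

    # A minimal self-training sketch (cf. Section 3.1.1); assumes scikit-learn
    # and SciPy. self_train, the 0.95 threshold, and rounds=5 are illustrative
    # choices, not the book's own code.
    import numpy as np
    from scipy.sparse import vstack
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def self_train(labeled_texts, labels, unlabeled_texts,
                   confidence=0.95, rounds=5):
        """Grow the training set with confidently self-labeled examples."""
        vec = CountVectorizer()
        vec.fit(list(labeled_texts) + list(unlabeled_texts))
        X_lab = vec.transform(labeled_texts)
        y_lab = np.asarray(labels)
        X_unl = vec.transform(unlabeled_texts)
        clf = MultinomialNB().fit(X_lab, y_lab)
        for _ in range(rounds):
            if X_unl.shape[0] == 0:
                break
            probs = clf.predict_proba(X_unl)
            sure = probs.max(axis=1) >= confidence      # confidently predicted rows
            if not sure.any():
                break
            pseudo = clf.classes_[probs[sure].argmax(axis=1)]
            X_lab = vstack([X_lab, X_unl[sure]])        # add pseudo-labeled examples
            y_lab = np.concatenate([y_lab, pseudo])
            X_unl = X_unl[~sure]
            clf = MultinomialNB().fit(X_lab, y_lab)     # retrain on the enlarged set
        return vec, clf

    # Usage with hypothetical data:
    # vec, clf = self_train(["good movie", "bad plot"], ["pos", "neg"],
    #                       ["great film", "awful acting"])

Each round retrains naive Bayes (Section 2.3) on the original labels plus whatever unlabeled examples the current model predicts with high confidence; the threshold controls how aggressively pseudo-labels are trusted.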
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE494
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 81-92).


Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed in: Compendex; INSPEC; Google scholar; Google book search.



Title from PDF t.p. (viewed on June 15, 2013).

