000 05956nam a2200649 i 4500
001 6813752
003 IEEE
005 20200413152910.0
006 m eo d
007 cr cn |||m|||a
008 130615s2013 caua foab 000 0 eng d
020 _a9781608459865 (electronic bk.)
020 _z9781608459858 (pbk.)
024 7 _a10.2200/S00497ED1V01Y201304HLT021
_2doi
035 _a(CaBNVSL)swl00402475
035 _a(OCoLC)848841958
040 _aCaBNVSL
_cCaBNVSL
_dCaBNVSL
050 4 _aQA76.9.N38
_bS647 2013
082 0 4 _a006.35
_223
090 _a
_bMoCl
_e201304HLT021
100 1 _aSøgaard, Anders.
245 1 0 _aSemi-supervised learning and domain adaptation in natural language processing
_h[electronic resource] /
_cAnders Søgaard.
260 _aSan Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) :
_bMorgan & Claypool,
_cc2013.
300 _a1 electronic text (x, 93 p.) :
_bill., digital file.
490 1 _aSynthesis lectures on human language technologies,
_x1947-4059 ;
_v# 21
538 _aMode of access: World Wide Web.
538 _aSystem requirements: Adobe Acrobat Reader.
500 _aPart of: Synthesis digital library of engineering and computer science.
500 _aSeries from website.
504 _aIncludes bibliographical references (p. 81-92).
505 0 _a1. Introduction -- 1.1 Introduction -- 1.2 Learning under bias -- 1.3 Empirical evaluations --
505 8 _a2. Supervised and unsupervised prediction -- 2.1 Standard assumptions in supervised learning -- 2.1.1 How to check whether the assumptions hold -- 2.2 Nearest neighbor -- 2.3 Naive Bayes -- 2.4 Perceptron -- 2.4.1 Large-margin methods -- 2.5 Comparisons of classification algorithms -- 2.6 Learning from weighted data -- 2.6.1 Weighted k-nearest neighbor -- 2.6.2 Weighted naive Bayes -- 2.6.3 Weighted perceptron -- 2.6.4 Weighted large-margin learning -- 2.7 Clustering algorithms -- 2.7.1 Hierarchical clustering -- 2.7.2 k-means -- 2.7.3 Expectation maximization -- 2.7.4 Evaluating clustering algorithms -- 2.8 Part-of-speech tagging -- 2.9 Dependency parsing -- 2.9.1 Transition-based dependency parsing -- 2.9.2 Graph-based dependency parsing --
505 8 _a3. Semi-supervised learning -- 3.1 Wrapper methods -- 3.1.1 Self-training -- 3.1.2 Co-training -- 3.1.3 Tri-training -- 3.1.4 Soft self-training, EM and co-EM -- 3.2 Clusters-as-features -- 3.3 Semi-supervised nearest neighbor -- 3.3.1 Label propagation -- 3.3.2 Semi-supervised nearest neighbor editing -- 3.3.3 Semi-supervised condensed nearest neighbor --
505 8 _a4. Learning under bias -- 4.1 Semi-supervised learning as transfer learning -- 4.2 Transferring data -- 4.2.1 Outlier detection -- 4.2.2 Importance weighting -- 4.3 Transferring features -- 4.3.1 Changing feature representation to minimize divergence -- 4.3.2 Structural correspondence learning -- 4.4 Transferring parameters --
505 8 _a5. Learning under unknown bias -- 5.1 Adversarial learning -- 5.2 Ensemble-based methods and meta-learning --
505 8 _a6. Evaluating under bias -- 6.1 What is language? -- 6.2 Significance across corpora -- 6.3 Meta-analysis -- 6.4 Performance and data characteristics -- 6.5 Down-stream evaluation --
505 8 _aBibliography -- Author's biography.
506 1 _aAbstract freely available; full-text restricted to subscribers or individual document purchasers.
510 0 _aCompendex
510 0 _aINSPEC
510 0 _aGoogle Scholar
510 0 _aGoogle Book Search
520 3 _aThis book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason unlabeled data can help is data sparsity, i.e., the limited amount of labeled data available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. It is intended to be both readable by first-year students and interesting to an expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established, and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant.
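520 3 _aThe wrapper methods listed in the contents (e.g., self-training, 3.1.1) can be illustrated with a minimal Python sketch; the scikit-learn classifier and the helper below are assumptions made for illustration, not code taken from the book.
    # Illustrative self-training sketch (assumed names, not the book's code):
    # a classifier trained on labeled data repeatedly labels an unlabeled pool
    # and retrains on its most confident predictions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, rounds=5, threshold=0.9):
        clf = LogisticRegression(max_iter=1000)
        X, y, pool = X_lab, y_lab, X_unlab
        for _ in range(rounds):
            clf.fit(X, y)
            if len(pool) == 0:
                break
            probs = clf.predict_proba(pool)
            confident = probs.max(axis=1) >= threshold  # keep only confident guesses
            if not confident.any():
                break
            X = np.vstack([X, pool[confident]])
            y = np.concatenate([y, clf.classes_[probs[confident].argmax(axis=1)]])
            pool = pool[~confident]  # shrink the unlabeled pool
        return clf
    The confidence threshold trades off how much unlabeled data is absorbed against the risk of reinforcing the classifier's own errors.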
530 _aAlso available in print.
588 _aTitle from PDF t.p. (viewed on June 15, 2013).
650 0 _aNatural language processing (Computer science)
650 0 _aSupervised learning (Machine learning)
653 _anatural language processing
653 _amachine learning
653 _alearning under bias
653 _asemi-supervised learning
776 0 8 _iPrint version:
_z9781608459858
830 0 _aSynthesis digital library of engineering and computer science.
830 0 _aSynthesis lectures on human language technologies ;
_v# 21.
_x1947-4059
856 4 2 _3Abstract with links to resource
_uhttp://ieeexplore.ieee.org/servlet/opac?bknumber=6813752
856 4 0 _3Abstract with links to full text
_uhttp://dx.doi.org/10.2200/S00497ED1V01Y201304HLT021
999 _c561994
_d561994