000 - LEADER |
fixed length control field |
12078nam a2200865 i 4500 |
001 - CONTROL NUMBER |
control field |
7909255 |
003 - CONTROL NUMBER IDENTIFIER |
control field |
IEEE |
005 - DATE AND TIME OF LATEST TRANSACTION |
control field |
20200413152924.0 |
006 - FIXED-LENGTH DATA ELEMENTS--ADDITIONAL MATERIAL CHARACTERISTICS |
fixed length control field |
m eo d |
007 - PHYSICAL DESCRIPTION FIXED FIELD--GENERAL INFORMATION |
fixed length control field |
cr cn |||m|||a |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION |
fixed length control field |
170418s2017 caua foab 000 0 eng d |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER |
International Standard Book Number |
9781627052955 |
Qualifying information |
ebook |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER |
Canceled/invalid ISBN |
9781627052986 |
Qualifying information |
print |
024 7# - OTHER STANDARD IDENTIFIER |
Standard number or code |
10.2200/S00762ED1V01Y201703HLT037 |
Source of number or code |
doi |
035 ## - SYSTEM CONTROL NUMBER |
System control number |
(CaBNVSL)swl00407294 |
035 ## - SYSTEM CONTROL NUMBER |
System control number |
(OCoLC)982699889 |
040 ## - CATALOGING SOURCE |
Original cataloging agency |
CaBNVSL |
Language of cataloging |
eng |
Description conventions |
rda |
Transcribing agency |
CaBNVSL |
Modifying agency |
CaBNVSL |
050 #4 - LIBRARY OF CONGRESS CALL NUMBER |
Classification number |
QA76.9.N38 |
Item number |
G655 2017 |
082 04 - DEWEY DECIMAL CLASSIFICATION NUMBER |
Classification number |
006.35 |
Edition number |
23 |
100 1# - MAIN ENTRY--PERSONAL NAME |
Personal name |
Goldberg, Yoav, |
Relator term |
author. |
245 10 - TITLE STATEMENT |
Title |
Neural network methods for natural language processing / |
Statement of responsibility, etc. |
Yoav Goldberg. |
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE |
Place of production, publication, distribution, manufacture |
[San Rafael, California] : |
Name of producer, publisher, distributor, manufacturer |
Morgan & Claypool, |
Date of production, publication, distribution, manufacture, or copyright notice |
2017. |
300 ## - PHYSICAL DESCRIPTION |
Extent |
1 PDF (xxii, 287 pages) : |
Other physical details |
illustrations. |
336 ## - CONTENT TYPE |
Content type term |
text |
Source |
rdacontent |
337 ## - MEDIA TYPE |
Media type term |
electronic |
Source |
isbdmedia |
338 ## - CARRIER TYPE |
Carrier type term |
online resource |
Source |
rdacarrier |
490 1# - SERIES STATEMENT |
Series statement |
Synthesis lectures on human language technologies, |
International Standard Serial Number |
1947-4059 ; |
Volume/sequential designation |
# 37 |
538 ## - SYSTEM DETAILS NOTE |
System details note |
System requirements: Adobe Acrobat Reader. |
538 ## - SYSTEM DETAILS NOTE |
System details note |
Mode of access: World Wide Web. |
500 ## - GENERAL NOTE |
General note |
Part of: Synthesis digital library of engineering and computer science. |
504 ## - BIBLIOGRAPHY, ETC. NOTE |
Bibliography, etc. note |
Includes bibliographical references (pages 253-285). |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
21. Conclusion -- 21.1 What have we seen? -- 21.2 The challenges ahead -- Bibliography -- Author's biography. |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
20. Cascaded, multi-task and semi-supervised learning -- 20.1 Model cascading -- 20.2 Multi-task learning -- 20.2.1 Training in a multi-task setup -- 20.2.2 Selective sharing -- 20.2.3 Word-embeddings pre-training as multi-task learning -- 20.2.4 Multi-task learning in conditioned generation -- 20.2.5 Multi-task learning as regularization -- 20.2.6 Caveats -- 20.3 Semi-supervised learning -- 20.4 Examples -- 20.4.1 Gaze-prediction and sentence compression -- 20.4.2 Arc labeling and syntactic parsing -- 20.4.3 Preposition sense disambiguation and preposition translation prediction -- 20.4.4 Conditioned generation: multilingual machine translation, parsing, and image captioning -- 20.5 Outlook -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
19. Structured output prediction -- 19.1 Search-based structured prediction -- 19.1.1 Structured prediction with linear models -- 19.1.2 Nonlinear structured prediction -- 19.1.3 Probabilistic objective (CRF) -- 19.1.4 Approximate search -- 19.1.5 Reranking -- 19.1.6 See also -- 19.2 Greedy structured prediction -- 19.3 Conditional generation as structured output prediction -- 19.4 Examples -- 19.4.1 Search-based structured prediction: first-order dependency parsing -- 19.4.2 Neural-CRF for named entity recognition -- 19.4.3 Approximate NER-CRF with beam-search -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
Part IV. Additional topics -- 18. Modeling trees with recursive neural networks -- 18.1 Formal definition -- 18.2 Extensions and variations -- 18.3 Training recursive neural networks -- 18.4 A simple alternative: linearized trees -- 18.5 Outlook -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
17. Conditioned generation -- 17.1 RNN generators -- 17.1.1 Training generators -- 17.2 Conditioned generation (encoder-decoder) -- 17.2.1 Sequence to sequence models -- 17.2.2 Applications -- 17.2.3 Other conditioning contexts -- 17.3 Unsupervised sentence similarity -- 17.4 Conditioned generation with attention -- 17.4.1 Computational complexity -- 17.4.2 Interpretability -- 17.5 Attention-based models in NLP -- 17.5.1 Machine translation -- 17.5.2 Morphological inflection -- 17.5.3 Syntactic parsing -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
16. Modeling with recurrent networks -- 16.1 Acceptors -- 16.1.1 Sentiment classification -- 16.1.2 Subject-verb agreement grammaticality detection -- 16.2 RNNs as feature extractors -- 16.2.1 Part-of-speech tagging -- 16.2.2 RNN-CNN document classification -- 16.2.3 Arc-factored dependency parsing -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
15. Concrete recurrent neural network architectures -- 15.1 CBOW as an RNN -- 15.2 Simple RNN -- 15.3 Gated architectures -- 15.3.1 LSTM -- 15.3.2 GRU -- 15.4 Other variants -- 15.5 Dropout in RNNs -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
14. Recurrent neural networks: modeling sequences and stacks -- 14.1 The RNN abstraction -- 14.2 RNN training -- 14.3 Common RNN usage-patterns -- 14.3.1 Acceptor -- 14.3.2 Encoder -- 14.3.3 Transducer -- 14.4 Bidirectional RNNs (biRNN) -- 14.5 Multi-layer (stacked) RNNs -- 14.6 RNNs for representing stacks -- 14.7 A note on reading the literature -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
Part III. Specialized architectures -- 13. Ngram detectors: convolutional neural networks -- 13.1 Basic convolution + pooling -- 13.1.1 1D convolutions over text -- 13.1.2 Vector pooling -- 13.1.3 Variations -- 13.2 Alternative: feature hashing -- 13.3 Hierarchical convolutions -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
12. Case study: a feed-forward architecture for sentence meaning inference -- 12.1 Natural language inference and the SNLI dataset -- 12.2 A textual similarity network -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
11. Using word embeddings -- 11.1 Obtaining word vectors -- 11.2 Word similarity -- 11.3 Word clustering -- 11.4 Finding similar words -- 11.4.1 Similarity to a group of words -- 11.5 Odd-one out -- 11.6 Short document similarity -- 11.7 Word analogies -- 11.8 Retrofitting and projections -- 11.9 Practicalities and pitfalls -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
10. Pre-trained word representations -- 10.1 Random initialization -- 10.2 Supervised task-specific pre-training -- 10.3 Unsupervised pre-training -- 10.3.1 Using pre-trained embeddings -- 10.4 Word embedding algorithms -- 10.4.1 Distributional hypothesis and word representations -- 10.4.2 From neural language models to distributed representations -- 10.4.3 Connecting the worlds -- 10.4.4 Other algorithms -- 10.5 The choice of contexts -- 10.5.1 Window approach -- 10.5.2 Sentences, paragraphs, or documents -- 10.5.3 Syntactic window -- 10.5.4 Multilingual -- 10.5.5 Character-based and sub-word representations -- 10.6 Dealing with multi-word units and word inflections -- 10.7 Limitations of distributional methods -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
9. Language modeling -- 9.1 The language modeling task -- 9.2 Evaluating language models: perplexity -- 9.3 Traditional approaches to language modeling -- 9.3.1 Further reading -- 9.3.2 Limitations of traditional language models -- 9.4 Neural language models -- 9.5 Using language models for generation -- 9.6 Byproduct: word representations -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
8. From textual features to inputs -- 8.1 Encoding categorical features -- 8.1.1 One-hot encodings -- 8.1.2 Dense encodings (feature embeddings) -- 8.1.3 Dense vectors vs. one-hot representations -- 8.2 Combining dense vectors -- 8.2.1 Window-based features -- 8.2.2 Variable number of features: continuous bag of words -- 8.3 Relation between one-hot and dense vectors -- 8.4 Odds and ends -- 8.4.1 Distance and position features -- 8.4.2 Padding, unknown words, and word dropout -- 8.4.3 Feature combinations -- 8.4.4 Vector sharing -- 8.4.5 Dimensionality -- 8.4.6 Embeddings vocabulary -- 8.4.7 Network's output -- 8.5 Example: part-of-speech tagging -- 8.6 Example: arc-factored parsing -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
7. Case studies of NLP features -- 7.1 Document classification: language identification -- 7.2 Document classification: topic classification -- 7.3 Document classification: authorship attribution -- 7.4 Word-in-context: part of speech tagging -- 7.5 Word-in-context: named entity recognition -- 7.6 Word in context, linguistic features: preposition sense disambiguation -- 7.7 Relation between words in context: arc-factored parsing -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
Part II. Working with natural language data -- 6. Features for textual data -- 6.1 Typology of NLP classification problems -- 6.2 Features for NLP problems -- 6.2.1 Directly observable properties -- 6.2.2 Inferred linguistic properties -- 6.2.3 Core features vs. combination features -- 6.2.4 Ngram features -- 6.2.5 Distributional features -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
5. Neural network training -- 5.1 The computation graph abstraction -- 5.1.1 Forward computation -- 5.1.2 Backward computation (derivatives, backprop) -- 5.1.3 Software -- 5.1.4 Implementation recipe -- 5.1.5 Network composition -- 5.2 Practicalities -- 5.2.1 Choice of optimization algorithm -- 5.2.2 Initialization -- 5.2.3 Restarts and ensembles -- 5.2.4 Vanishing and exploding gradients -- 5.2.5 Saturation and dead neurons -- 5.2.6 Shuffling -- 5.2.7 Learning rate -- 5.2.8 Minibatches -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
4. Feed-forward neural networks -- 4.1 A brain-inspired metaphor -- 4.2 In mathematical notation -- 4.3 Representation power -- 4.4 Common nonlinearities -- 4.5 Loss functions -- 4.6 Regularization and dropout -- 4.7 Similarity and distance layers -- 4.8 Embedding layers -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
3. From linear models to multi-layer perceptrons -- 3.1 Limitations of linear models: The XOR problem -- 3.2 Nonlinear input transformations -- 3.3 Kernel methods -- 3.4 Trainable mapping functions -- |
505 8# - FORMATTED CONTENTS NOTE |
Formatted contents note |
Part I. Supervised classification and feed-forward neural networks -- 2. Learning basics and linear models -- 2.1 Supervised learning and parameterized functions -- 2.2 Train, test, and validation sets -- 2.3 Linear models -- 2.3.1 Binary classification -- 2.3.2 Log-linear binary classification -- 2.3.3 Multi-class classification -- 2.4 Representations -- 2.5 One-hot and dense vector representations -- 2.6 Log-linear multi-class classification -- 2.7 Training as optimization -- 2.7.1 Loss functions -- 2.7.2 Regularization -- 2.8 Gradient-based optimization -- 2.8.1 Stochastic gradient descent -- 2.8.2 Worked-out example -- 2.8.3 Beyond SGD -- |
505 0# - FORMATTED CONTENTS NOTE |
Formatted contents note |
1. Introduction -- 1.1 The challenges of natural language processing -- 1.2 Neural networks and deep learning -- 1.3 Deep learning in NLP -- 1.3.1 Success stories -- 1.4 Coverage and organization -- 1.5 What's not covered -- 1.6 A note on terminology -- 1.7 Mathematical notation -- |
506 ## - RESTRICTIONS ON ACCESS NOTE |
Terms governing access |
Abstract freely available; full-text restricted to subscribers or individual document purchasers. |
510 0# - CITATION/REFERENCES NOTE |
Name of source |
Google Book Search |
510 0# - CITATION/REFERENCES NOTE |
Name of source |
Google Scholar |
510 0# - CITATION/REFERENCES NOTE |
Name of source |
INSPEC |
510 0# - CITATION/REFERENCES NOTE |
Name of source |
Compendex |
520 3# - SUMMARY, ETC. |
Summary, etc. |
Neural networks are a family of powerful machine learning models. This book focuses on the application of neural network models to natural language data. The first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows one to easily define and train arbitrary neural networks and is the basis behind the design of contemporary neural network software libraries. The second half of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. These architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, the book discusses tree-shaped networks, structured prediction, and the prospects of multi-task learning. |
530 ## - ADDITIONAL PHYSICAL FORM AVAILABLE NOTE |
Additional physical form available note |
Also available in print. |
588 ## - SOURCE OF DESCRIPTION NOTE |
Source of description note |
Title from PDF title page (viewed on April 18, 2017). |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name entry element |
Neural networks (Computer science) |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name entry element |
Natural language processing (Computer science) |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
sequence to sequence models |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
recurrent neural networks |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
word embeddings |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
neural networks |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
deep learning |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
supervised learning |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
machine learning |
653 ## - INDEX TERM--UNCONTROLLED |
Uncontrolled term |
natural language processing |
776 08 - ADDITIONAL PHYSICAL FORM ENTRY |
Relationship information |
Print version: |
International Standard Book Number |
9781627052986 |
830 #0 - SERIES ADDED ENTRY--UNIFORM TITLE |
Uniform title |
Synthesis lectures on human language technologies ; |
Volume/sequential designation |
# 37. |
International Standard Serial Number |
1947-4059 |
830 #0 - SERIES ADDED ENTRY--UNIFORM TITLE |
Uniform title |
Synthesis digital library of engineering and computer science. |
856 42 - ELECTRONIC LOCATION AND ACCESS |
Materials specified |
Abstract with links to resource |
Uniform Resource Identifier |
http://ieeexplore.ieee.org/servlet/opac?bknumber=7909255 |