Lifelong machine learning

By: Chen, Zhiyuan (Computer scientist), author.
Contributor(s): Liu, Bing, 1963-, author.
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on artificial intelligence and machine learning, # 38.
Publisher: [San Rafael, California] : Morgan & Claypool, 2018.
Edition: Second edition.
Description: 1 PDF (xix, 187 pages) : illustrations.
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781681733036.
Subject(s): Machine learning | lifelong machine learning | lifelong learning | continuous learning | continual learning | meta-learning | never-ending learning | multi-task learning | transfer learning
DDC classification: 006.31
Online resources: Abstract with links to resource
Also available in print.
Contents:
1. Introduction -- 1.1 Classic machine learning paradigm -- 1.2 Motivating examples -- 1.3 A brief history of lifelong learning -- 1.4 Definition of lifelong learning -- 1.5 Types of knowledge and key challenges -- 1.6 Evaluation methodology and role of big data -- 1.7 Outline of the book --
2. Related learning paradigms -- 2.1 Transfer learning -- 2.1.1 Structural correspondence learning -- 2.1.2 Naïve Bayes transfer classifier -- 2.1.3 Deep learning in transfer learning -- 2.1.4 Difference from lifelong learning -- 2.2 Multi-task learning -- 2.2.1 Task relatedness in multi-task learning -- 2.2.2 GO-MTL: multi-task learning using latent basis -- 2.2.3 Deep learning in multi-task learning -- 2.2.4 Difference from lifelong learning -- 2.3 Online learning -- 2.3.1 Difference from lifelong learning -- 2.4 Reinforcement learning -- 2.4.1 Difference from lifelong learning -- 2.5 Meta learning -- 2.5.1 Difference from lifelong learning -- 2.6 Summary --
3. Lifelong supervised learning -- 3.1 Definition and overview -- 3.2 Lifelong memory-based learning -- 3.2.1 Two memory-based learning methods -- 3.2.2 Learning a new representation for lifelong learning -- 3.3 Lifelong neural networks -- 3.3.1 MTL net -- 3.3.2 Lifelong EBNN -- 3.4 ELLA: an efficient lifelong learning algorithm -- 3.4.1 Problem setting -- 3.4.2 Objective function -- 3.4.3 Dealing with the first inefficiency -- 3.4.4 Dealing with the second inefficiency -- 3.4.5 Active task selection -- 3.5 Lifelong naive Bayesian classification -- 3.5.1 Naïve Bayesian text classification -- 3.5.2 Basic ideas of LSC -- 3.5.3 LSC technique -- 3.5.4 Discussions -- 3.6 Domain word embedding via meta-learning -- 3.7 Summary and evaluation datasets --
4. Continual learning and catastrophic forgetting -- 4.1 Catastrophic forgetting -- 4.2 Continual learning in neural networks -- 4.3 Learning without forgetting -- 4.4 Progressive neural networks -- 4.5 Elastic weight consolidation -- 4.6 iCaRL: incremental classifier and representation learning -- 4.6.1 Incremental training -- 4.6.2 Updating representation -- 4.6.3 Constructing exemplar sets for new classes -- 4.6.4 Performing classification in iCaRL -- 4.7 Expert gate -- 4.7.1 Autoencoder gate -- 4.7.2 Measuring task relatedness for training -- 4.7.3 Selecting the most relevant expert for testing -- 4.7.4 Encoder-based lifelong learning -- 4.8 Continual learning with generative replay -- 4.8.1 Generative adversarial networks -- 4.8.2 Generative replay -- 4.9 Evaluating catastrophic forgetting -- 4.10 Summary and evaluation datasets --
5. Open-world learning -- 5.1 Problem definition and applications -- 5.2 Center-based similarity space learning -- 5.2.1 Incrementally updating a CBS learning model -- 5.2.2 Testing a CBS learning model -- 5.2.3 CBS learning for unseen class detection -- 5.3 DOC: deep open classification -- 5.3.1 Feed-forward layers and the 1-vs.-rest layer -- 5.3.2 Reducing open-space risk -- 5.3.3 DOC for image classification -- 5.3.4 Unseen class discovery -- 5.4 Summary and evaluation datasets --
6. Lifelong topic modeling -- 6.1 Main ideas of lifelong topic modeling -- 6.2 LTM: a lifelong topic model -- 6.2.1 LTM model -- 6.2.2 Topic knowledge mining -- 6.2.3 Incorporating past knowledge -- 6.2.4 Conditional distribution of Gibbs sampler -- 6.3 AMC: a lifelong topic model for small data -- 6.3.1 Overall algorithm of AMC -- 6.3.2 Mining must-link knowledge -- 6.3.3 Mining cannot-link knowledge -- 6.3.4 Extended Pólya Urn model -- 6.3.5 Sampling distributions in Gibbs sampler -- 6.4 Summary and evaluation datasets --
7. Lifelong information extraction -- 7.1 NELL: a never-ending language learner -- 7.1.1 NELL architecture -- 7.1.2 Extractors and learning in NELL -- 7.1.3 Coupling constraints in NELL -- 7.2 Lifelong opinion target extraction -- 7.2.1 Lifelong learning through recommendation -- 7.2.2 AER algorithm -- 7.2.3 Knowledge learning -- 7.2.4 Recommendation using past knowledge -- 7.3 Learning on the job -- 7.3.1 Conditional random fields -- 7.3.2 General dependency feature -- 7.3.3 The L-CRF algorithm -- 7.4 Lifelong-RL: lifelong relaxation labeling -- 7.4.1 Relaxation labeling -- 7.4.2 Lifelong relaxation labeling -- 7.5 Summary and evaluation datasets --
8. Continuous knowledge learning in chatbots -- 8.1 LiLi: lifelong interactive learning and inference -- 8.2 Basic ideas of LiLi -- 8.3 Components of LiLi -- 8.4 A running example -- 8.5 Summary and evaluation datasets --
9. Lifelong reinforcement learning -- 9.1 Lifelong reinforcement learning through multiple environments -- 9.1.1 Acquiring and incorporating bias -- 9.2 Hierarchical Bayesian lifelong reinforcement learning -- 9.2.1 Motivation -- 9.2.2 Hierarchical Bayesian approach -- 9.2.3 MTRL algorithm -- 9.2.4 Updating hierarchical model parameters -- 9.2.5 Sampling an MDP -- 9.3 PG-ELLA: lifelong policy gradient reinforcement learning -- 9.3.1 Policy gradient reinforcement learning -- 9.3.2 Policy gradient lifelong learning setting -- 9.3.3 Objective function and optimization -- 9.3.4 Safe policy search for lifelong learning -- 9.3.5 Cross-domain lifelong reinforcement learning -- 9.4 Summary and evaluation datasets --
10. Conclusion and future directions -- Bibliography -- Authors' biographies.
Abstract: This is an introduction to an advanced machine learning paradigm that continuously learns by accumulating past knowledge that it then uses in future learning and problem solving. In contrast, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model that is then used in its intended application. It makes no attempt to retain the learned knowledge and use it in subsequent learning. Unlike this isolated paradigm, humans learn effectively with only a few examples precisely because our learning is highly knowledge-driven: the knowledge learned in the past helps us learn new things with little data or effort. Lifelong learning aims to emulate this capability, because without it an AI system cannot be considered truly intelligent. Research in lifelong learning has developed significantly in the relatively short time since the first edition of this book was published. The purpose of this second edition is to expand the definition of lifelong learning, update the content of several chapters, and add a new chapter about continual learning in deep neural networks, which has been actively researched over the past two or three years. A few chapters have also been reorganized to make each of them more coherent for the reader. Moreover, the authors propose a unified framework for the research area. Several research topics in machine learning are closely related to lifelong learning (most notably multi-task learning, transfer learning, and meta-learning) because they also employ the idea of knowledge sharing and transfer. This book brings all these topics under one roof and discusses their similarities and differences. Its goal is to introduce this emerging machine learning paradigm and present a comprehensive survey and review of the important research results and latest ideas in the area. The book is thus suitable for students, researchers, and practitioners interested in machine learning, data mining, natural language processing, or pattern recognition, and lecturers can readily use it for courses in any of these fields.
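For readers new to the paradigm, the contrast the abstract draws between isolated and lifelong learning can be sketched in a few lines of Python. This is an illustrative toy, not code from the book: the names LifelongLearner and train_task are hypothetical, and the feature-count "knowledge" only loosely echoes the naive Bayes-style knowledge reuse surveyed in Chapter 3.

    from collections import Counter

    def train_isolated(dataset):
        """Classic paradigm: learn from this dataset alone; nothing is retained."""
        counts = Counter()
        for features, label in dataset:
            for f in features:
                counts[(f, label)] += 1
        return counts

    class LifelongLearner:
        """Toy learner that retains knowledge across tasks and reuses it."""
        def __init__(self):
            self.knowledge = Counter()  # accumulated (feature, label) evidence

        def train_task(self, dataset):
            new_counts = Counter()
            for features, label in dataset:
                for f in features:
                    new_counts[(f, label)] += 1
            self.knowledge.update(new_counts)  # retain this task's knowledge
            return self.knowledge.copy()       # reuse everything learned so far

    learner = LifelongLearner()
    learner.train_task([(["fast", "light"], "pos"), (["slow"], "neg")])
    model = learner.train_task([(["fast"], "pos")])
    print(model[("light", "pos")])  # 1: evidence carried over from the first task

An isolated learner trained only on the second task would have no count for ("light", "pos"); the lifelong learner answers from knowledge accumulated on the first task.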
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE817
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 159-186).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex, INSPEC, Google Scholar, Google Book Search.

Also available in print.

Title from PDF title page (viewed on August 29, 2018).
