Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Lifelong machine learning

By: Chen, Zhiyuan (Computer scientist), author.
Contributor(s): Liu, Bing, 1963-, author.
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on artificial intelligence and machine learning, #33
Publisher: [San Rafael, California]: Morgan & Claypool, 2017
Description: 1 PDF (xvii, 127 pages)
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627058773
Subject(s): Machine learning | lifelong machine learning | lifelong learning | learning with memory | cumulative learning | multi-task learning | transfer learning
DDC classification: 006.31
Online resources: Abstract with links to resource
Also available in print.
Contents:
1. Introduction -- 1.1 A brief history of lifelong learning -- 1.2 Definition of lifelong learning -- 1.3 Lifelong learning system architecture -- 1.4 Evaluation methodology -- 1.5 Role of big data in lifelong learning -- 1.6 Outline of the book --
2. Related learning paradigms -- 2.1 Transfer learning -- 2.1.1 Structural correspondence learning -- 2.1.2 Naïve Bayes transfer classifier -- 2.1.3 Deep learning in transfer learning -- 2.1.4 Difference from lifelong learning -- 2.2 Multi-task learning -- 2.2.1 Task relatedness in multi-task learning -- 2.2.2 GO-MTL: multi-task learning using latent basis -- 2.2.3 Deep learning in multi-task learning -- 2.2.4 Difference from lifelong learning -- 2.3 Online learning -- 2.3.1 Difference from lifelong learning -- 2.4 Reinforcement learning -- 2.4.1 Difference from lifelong learning -- 2.5 Summary --
3. Lifelong supervised learning -- 3.1 Definition and overview -- 3.2 Lifelong memory-based learning -- 3.2.1 Two memory-based learning methods -- 3.2.2 Learning a new representation for lifelong learning -- 3.3 Lifelong neural networks -- 3.3.1 MTL Net -- 3.3.2 Lifelong EBNN -- 3.4 Cumulative learning and self-motivated learning -- 3.4.1 Training a cumulative learning model -- 3.4.2 Testing a cumulative learning model -- 3.4.3 Open world learning for unseen class detection -- 3.5 ELLA: an efficient lifelong learning algorithm -- 3.5.1 Problem setting -- 3.5.2 Objective function -- 3.5.3 Dealing with the first inefficiency -- 3.5.4 Dealing with the second inefficiency -- 3.5.5 Active task selection -- 3.6 LSC: lifelong sentiment classification -- 3.6.1 Naïve Bayesian text classification -- 3.6.2 Basic ideas of LSC -- 3.6.3 LSC technique -- 3.7 Summary and evaluation datasets --
4. Lifelong unsupervised learning -- 4.1 Lifelong topic modeling -- 4.2 LTM: a lifelong topic model -- 4.2.1 LTM model -- 4.2.2 Topic knowledge mining -- 4.2.3 Incorporating past knowledge -- 4.2.4 Conditional distribution of Gibbs sampler -- 4.3 AMC: a lifelong topic model for small data -- 4.3.1 Overall algorithm of AMC -- 4.3.2 Mining must-link knowledge -- 4.3.3 Mining cannot-link knowledge -- 4.3.4 Extended Pólya Urn model -- 4.3.5 Sampling distributions in Gibbs sampler -- 4.4 Lifelong information extraction -- 4.4.1 Lifelong learning through recommendation -- 4.4.2 AER algorithm -- 4.4.3 Knowledge learning -- 4.4.4 Recommendation using past knowledge -- 4.5 Lifelong-RL: lifelong relaxation labeling -- 4.5.1 Relaxation labeling -- 4.5.2 Lifelong relaxation labeling -- 4.6 Summary and evaluation datasets --
5. Lifelong semi-supervised learning for information extraction -- 5.1 NELL: a never ending language learner -- 5.2 NELL architecture -- 5.3 Extractors and learning in NELL -- 5.4 Coupling constraints in NELL -- 5.5 Summary --
6. Lifelong reinforcement learning -- 6.1 Lifelong reinforcement learning through multiple environments -- 6.1.1 Acquiring and incorporating bias -- 6.2 Hierarchical Bayesian lifelong reinforcement learning -- 6.2.1 Motivation -- 6.2.2 Hierarchical Bayesian approach -- 6.2.3 MTRL algorithm -- 6.2.4 Updating hierarchical model parameters -- 6.2.5 Sampling an MDP -- 6.3 PG-ELLA: lifelong policy gradient reinforcement learning -- 6.3.1 Policy gradient reinforcement learning -- 6.3.2 Policy gradient lifelong learning setting -- 6.3.3 Objective function and optimization -- 6.3.4 Safe policy search for lifelong learning -- 6.3.5 Cross-domain lifelong reinforcement learning -- 6.4 Summary and evaluation datasets --
7. Conclusion and future directions -- Bibliography -- Authors' biographies.
Abstract: Lifelong Machine Learning (or Lifelong Learning) is an advanced machine learning paradigm that learns continuously, accumulates the knowledge learned in previous tasks, and uses it to help future learning. In the process, the learner becomes more and more knowledgeable and effective at learning. This learning ability is one of the hallmarks of human intelligence. However, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model. It makes no attempt to retain the learned knowledge and use it in future learning. Although this isolated learning paradigm has been very successful, it requires a large number of training examples and is only suitable for well-defined and narrow tasks. In contrast, we humans can learn effectively from a few examples because we have accumulated so much knowledge in the past, which enables us to learn with little data or effort. Lifelong learning aims to achieve this capability. As statistical machine learning matures, it is time to make a major effort to break the isolated learning tradition and to study lifelong learning to bring machine learning to new heights. Applications such as intelligent assistants, chatbots, and physical robots that interact with humans and systems in real-life environments also call for such lifelong learning capabilities. Without the ability to accumulate learned knowledge and use it to learn more knowledge incrementally, a system will probably never be truly intelligent. This book serves as an introductory text and survey of lifelong learning.
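The loop the abstract describes — learn a sequence of tasks, retain knowledge from each, and reuse it when learning the next — can be sketched in a few lines. This is a minimal illustration, not an algorithm from the book; the names `KnowledgeBase` and `learn_task` and the feature-counting scheme are all illustrative assumptions.

```python
# Minimal sketch of the lifelong learning loop: learn tasks in sequence,
# retain knowledge, and let it strengthen learning on later tasks.

class KnowledgeBase:
    """Accumulates per-feature statistics across past tasks."""
    def __init__(self):
        self.counts = {}  # feature -> number of past tasks it appeared in

    def update(self, features):
        for f in features:
            self.counts[f] = self.counts.get(f, 0) + 1

    def prior(self, f):
        # Features seen in many past tasks get a stronger prior.
        return self.counts.get(f, 0)

def learn_task(task_features, kb):
    """'Train' on one task: weight each feature by accumulated knowledge,
    then fold this task's features back into the knowledge base."""
    model = {f: 1 + kb.prior(f) for f in task_features}
    kb.update(task_features)  # retain what this task taught us
    return model

kb = KnowledgeBase()
tasks = [["good", "great"], ["good", "bad"], ["good", "awful"]]
models = [learn_task(t, kb) for t in tasks]
# "good" recurs across tasks, so its weight grows: 1, then 2, then 3 --
# knowledge from earlier tasks helps the later ones.
```

The point of the sketch is the contrast with isolated learning: an isolated learner would start each task with an empty `kb` and assign every feature the same weight.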
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE730
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 111-125).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.


Also available in print.

Title from PDF title page (viewed on December 5, 2016).
