Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)

Statistical relational artificial intelligence : logic, probability, and computation /

By: Raedt, Luc de, 1964- [author].
Contributor(s): Kersting, Kristian [author] | Natarajan, Sriraam [author] | Poole, David L., 1958- [author].
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on artificial intelligence and machine learning: #32
Publisher: San Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, 2016
Description: 1 PDF (xiv, 175 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627058421
Subject(s): Artificial intelligence -- Computer simulation | Logic -- Computer simulation | probabilistic logic models | relational probabilistic models | lifted inference | statistical relational learning | probabilistic programming | inductive logic programming | logic programming | machine learning | Prolog | ProbLog | Markov logic networks
DDC classification: 006.3
Online resources: Abstract with links to resource
Contents:
1. Motivation -- 1.1 Uncertainty in complex worlds -- 1.2 Challenges of understanding StarAI -- 1.3 The benefits of mastering StarAI -- 1.4 Applications of StarAI -- 1.5 Brief historical overview --
Part I. Representations -- 2. Statistical and relational AI representations -- 2.1 Probabilistic graphical models -- 2.1.1 Bayesian networks -- 2.1.2 Markov networks and factor graphs -- 2.2 First-order logic and logic programming --
3. Relational probabilistic representations -- 3.1 A general view: parameterized probabilistic models -- 3.2 Two example representations: Markov logic and ProbLog -- 3.2.1 Undirected relational model: Markov logic -- 3.2.2 Directed relational models: ProbLog --
4. Representational issues -- 4.1 Knowledge representation formalisms -- 4.2 Objectives for representation language -- 4.3 Directed vs. undirected models -- 4.4 First-order logic vs. logic programs -- 4.5 Factors and formulae -- 4.6 Parameterizing atoms -- 4.7 Aggregators and combining rules -- 4.8 Open universe models -- 4.8.1 Identity uncertainty -- 4.8.2 Existence uncertainty -- 4.8.3 Ontologies --
Part II. Inference -- 5. Inference in propositional models -- 5.1 Probabilistic inference -- 5.1.1 Variable elimination -- 5.1.2 Recursive conditioning -- 5.1.3 Belief propagation -- 5.2 Logical inference -- 5.2.1 Propositional logic, satisfiability, and weighted model counting -- 5.2.2 Semiring inference -- 5.2.3 The least Herbrand model -- 5.2.4 Grounding -- 5.2.5 Proving --
6. Inference in relational probabilistic models -- 6.1 Grounded inference for relational probabilistic models -- 6.1.1 Weighted model counting -- 6.1.2 WMC for Markov logic -- 6.1.3 WMC for ProbLog -- 6.1.4 Knowledge compilation -- 6.2 Lifted inference: exploiting symmetries -- 6.2.1 Exact lifted inference -- 6.3 (Lifted) approximate inference --
Part III. Learning -- 7. Learning probabilistic and logical models -- 7.1 Learning probabilistic models -- 7.1.1 Fully observed data and known structure -- 7.1.2 Partially observed data with known structure -- 7.1.3 Unknown structure and parameters -- 7.2 Logical and relational learning -- 7.2.1 Two learning settings -- 7.2.2 The search space -- 7.2.3 Two algorithms: clausal discovery and FOIL -- 7.2.4 From propositional to first-order logic -- 7.2.5 An ILP example --
8. Learning probabilistic relational models -- 8.1 Learning as inference -- 8.2 The learning problem -- 8.2.1 The data used -- 8.3 Parameter learning of relational models -- 8.3.1 Fully observable data -- 8.3.2 Partially observed data -- 8.3.3 Learning with latent variables -- 8.4 Structure learning of probabilistic relational models -- 8.4.1 A vanilla structure learning approach -- 8.4.2 Probabilistic relational models -- 8.4.3 Boosting -- 8.5 Bayesian learning -- Part IV. Beyond probabilities --
9. Beyond basic probabilistic inference and learning -- 9.1 Lifted satisfiability -- 9.2 Acting in noisy relational worlds -- 9.3 Relational optimization --
10. Conclusions -- Bibliography -- Authors' biographies -- Index.
Abstract: An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them, as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations. The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models defined by weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
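
As a rough illustration of the ProbLog semantics mentioned in the abstract, the following self-contained Python sketch enumerates the possible worlds of a tiny invented program and computes query probabilities by summing world weights. The burglary, earthquake, and alarm atoms, their probabilities, and the helper names (consequences, query) are illustrative placeholders, not examples from the book; production ProbLog systems also avoid this exponential enumeration by reducing inference to weighted model counting over compiled circuits, as the contents note above indicates (Sections 6.1.1 and 6.1.3).

    # Minimal sketch of possible-world (distribution) semantics for a
    # ProbLog-style program. All atoms and probabilities are invented
    # for illustration; real systems use weighted model counting.
    from itertools import product

    # Probabilistic facts: each is independently true with its probability.
    prob_facts = {"burglary": 0.1, "earthquake": 0.2}

    def consequences(world):
        """Extend a truth assignment over the probabilistic facts with the
        atoms derived by the (acyclic) rules: alarm :- burglary. alarm :- earthquake."""
        derived = dict(world)
        derived["alarm"] = world["burglary"] or world["earthquake"]
        return derived

    def query(atom, evidence=None):
        """P(atom | evidence), by brute-force enumeration of possible worlds."""
        evidence = evidence or {}
        p_match, p_evidence = 0.0, 0.0
        names = list(prob_facts)
        for values in product([True, False], repeat=len(names)):
            world = dict(zip(names, values))
            # Weight of this world: product over the independent fact choices.
            weight = 1.0
            for name, value in world.items():
                weight *= prob_facts[name] if value else 1.0 - prob_facts[name]
            full = consequences(world)
            if all(full.get(a) == v for a, v in evidence.items()):
                p_evidence += weight
                if full[atom]:
                    p_match += weight
        return p_match / p_evidence

    print(query("alarm"))                      # prior probability of alarm
    print(query("burglary", {"alarm": True}))  # posterior, given the alarm rang

With these made-up numbers the sketch prints P(alarm) = 0.28 and P(burglary | alarm) ≈ 0.357, showing how evidence on a derived atom updates belief in a probabilistic fact.
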
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE698
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 139-167) and index.

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex | INSPEC | Google Scholar | Google Book Search.

Also available in print.

Title from PDF title page (viewed on April 14, 2016).
