Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)

Multi-armed bandits : theory and applications to online learning in networks

By: Zhao, Qing (Ph.D. in electrical engineering) [author].
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on communication networks, #22
Publisher: [San Rafael, California] : Morgan & Claypool, [2020]
Description: 1 PDF (xviii, 147 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627058711
Subject(s): Machine learning | Reinforcement learning | multi-armed bandit | machine learning | online learning | reinforcement learning | Markov decision processes
Genre/Form: Electronic books
DDC classification: 006.3/1
Online resources: Abstract with links to full text | Abstract with links to resource
Also available in print.
Contents:
1. Introduction -- 1.1. Multi-armed bandit problems -- 1.2. An essential conflict : exploration vs. exploitation -- 1.3. Two formulations : Bayesian and frequentist -- 1.4. Notation
2. Bayesian bandit model and Gittins index -- 2.1. Markov decision processes -- 2.2. The Bayesian bandit model -- 2.3. Gittins index -- 2.4. Optimality of the Gittins index policy -- 2.5. Computing Gittins index -- 2.6. Semi-Markov bandit processes
3. Variants of the Bayesian bandit model -- 3.1. Necessary assumptions for the index theorem -- 3.2. Variations in the action space -- 3.3. Variations in the system dynamics -- 3.4. Variations in the reward structure -- 3.5. Variations in performance measure
4. Frequentist bandit model -- 4.1. Basic formulations and regret measures -- 4.2. Lower bounds on regret -- 4.3. Online learning algorithms -- 4.4. Connections between Bayesian and frequentist bandit models
5. Variants of the frequentist bandit model -- 5.1. Variations in the reward model -- 5.2. Variations in the action space -- 5.3. Variations in the observation model -- 5.4. Variations in the performance measure -- 5.5. Learning in context : bandits with side information -- 5.6. Learning under competition : bandits with multiple players
6. Application examples -- 6.1. Communication and computer networks -- 6.2. Social-economic networks.
Summary: Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem, posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
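The "essential conflict" between exploration and exploitation mentioned in the summary and in Chapter 1 can be made concrete with a small sketch. The Python below is purely illustrative and not taken from the book: it implements UCB1 (Auer, Cesa-Bianchi, and Fischer, 2002), a classic frequentist index policy of the kind surveyed in Chapter 4, on a hypothetical Bernoulli bandit whose reward probabilities are made up for the demonstration.

```python
import math
import random

# Illustrative sketch only: UCB1 on a Bernoulli bandit.
# The reward probabilities are hypothetical, chosen for demonstration.

def ucb1(reward_probs, horizon):
    """Play a Bernoulli bandit for `horizon` rounds with the UCB1 policy."""
    n_arms = len(reward_probs)
    counts = [0] * n_arms    # number of pulls per arm
    totals = [0.0] * n_arms  # cumulative reward per arm

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull each arm once to initialize
        else:
            # Index = empirical mean (exploitation) + confidence bonus (exploration)
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if random.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward

    return counts, sum(totals)

if __name__ == "__main__":
    random.seed(0)
    counts, total = ucb1([0.3, 0.5, 0.7], horizon=10_000)
    print("pulls per arm:", counts, "total reward:", total)
```

Run over many rounds, the policy concentrates most pulls on the best arm while the logarithmic confidence bonus keeps forcing occasional exploration of the others: the exploration-exploitation trade-off in miniature.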
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE950
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 127-145).

Abstract freely available; full text restricted to subscribers or individual document purchasers.

Indexed by: Compendex | INSPEC | Google Scholar | Google Book Search

Title from PDF title page (viewed on November 27, 2019).
