Tag | Ind1 | Ind2 | Subfields
---|---|---|---
000 | | | 05156nam a2200685 i 4500
001 | | | 8910671
003 | | | IEEE
005 | | | 20200413152934.0
006 | | | m eo d
007 | | | cr bn \|\|\|m\|\|\|a
008 | | | 191127s2020 caua fob 000 0 eng d
020 | | | _a9781627058711 _qelectronic
020 | | | _z9781681736372 _qhardcover
020 | | | _z9781627056380 _qpaperback
024 | 7 | | _a10.2200/S00941ED2V01Y201907CNT022 _2doi
035 | | | _a(CaBNVSL)thg00979755
035 | | | _a(OCoLC)1129092706
040 | | | _aCaBNVSL _beng _erda _cCaBNVSL _dCaBNVSL
050 | | 4 | _aQ325.5 _b.Z536 2020eb
082 | 0 | 4 | _a006.3/1 _223
100 | 1 | | _aZhao, Qing _c(Ph.D. in electrical engineering), _eauthor.
245 | 1 | 0 | _aMulti-armed bandits : _btheory and applications to online learning in networks / _cQing Zhao.
264 | | 1 | _a[San Rafael, California] : _bMorgan & Claypool, _c[2020]
300 | | | _a1 PDF (xviii, 147 pages) : _billustrations.
336 | | | _atext _2rdacontent
337 | | | _aelectronic _2isbdmedia
338 | | | _aonline resource _2rdacarrier
490 | 1 | | _aSynthesis lectures on communication networks, _x1935-4193 ; _v#22
538 | | | _aMode of access: World Wide Web.
538 | | | _aSystem requirements: Adobe Acrobat Reader.
500 | | | _aPart of: Synthesis digital library of engineering and computer science.
504 | | | _aIncludes bibliographical references (pages 127-145).
505 | 0 | | _a1. Introduction -- 1.1. Multi-armed bandit problems -- 1.2. An essential conflict : exploration vs. exploitation -- 1.3. Two formulations : Bayesian and frequentist -- 1.4. Notation
505 | 8 | | _a2. Bayesian bandit model and Gittins index -- 2.1. Markov decision processes -- 2.2. The Bayesian bandit model -- 2.3. Gittins index -- 2.4. Optimality of the Gittins index policy -- 2.5. Computing Gittins index -- 2.6. Semi-Markov bandit processes
505 | 8 | | _a3. Variants of the Bayesian bandit model -- 3.1. Necessary assumptions for the index theorem -- 3.2. Variations in the action space -- 3.3. Variations in the system dynamics -- 3.4. Variations in the reward structure -- 3.5. Variations in performance measure
505 | 8 | | _a4. Frequentist bandit model -- 4.1. Basic formulations and regret measures -- 4.2. Lower bounds on regret -- 4.3. Online learning algorithms -- 4.4. Connections between Bayesian and frequentist bandit models
505 | 8 | | _a5. Variants of the frequentist bandit model -- 5.1. Variations in the reward model -- 5.2. Variations in the action space -- 5.3. Variations in the observation model -- 5.4. Variations in the performance measure -- 5.5. Learning in context : bandits with side information -- 5.6. Learning under competition : bandits with multiple players
505 | 8 | | _a6. Application examples -- 6.1. Communication and computer networks -- 6.2. Social-economic networks.
506 | | | _aAbstract freely available; full-text restricted to subscribers or individual document purchasers.
510 | 0 | | _aCompendex
510 | 0 | | _aINSPEC
510 | 0 | | _aGoogle scholar
510 | 0 | | _aGoogle book search
520 | | | _aMulti-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem posed by Thompson in 1933 for the application of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches--Bayesian and frequentist--and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
530 | | | _aAlso available in print.
588 | | | _aTitle from PDF title page (viewed on November 27, 2019).
650 | | 0 | _aMachine learning.
650 | | 0 | _aReinforcement learning.
653 | | | _amulti-armed bandit
653 | | | _amachine learning
653 | | | _aonline learning
653 | | | _areinforcement learning
653 | | | _aMarkov decision processes
655 | | 0 | _aElectronic books.
776 | 0 | 8 | _iPrint version: _z9781627056380 _z9781681736372
830 | | 0 | _aSynthesis digital library of engineering and computer science.
830 | | 0 | _aSynthesis lectures on communication networks ; _v#22.
856 | 4 | 0 | _3Abstract with links to full text _uhttps://doi.org/10.2200/S00941ED2V01Y201907CNT022
856 | 4 | 2 | _3Abstract with links to resource _uhttps://ieeexplore.ieee.org/servlet/opac?bknumber=8910671
999 | | | _c562450 _d562450
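
The 520 abstract above turns on the exploration vs. exploitation conflict (section 1.2 of the contents note). As a minimal illustrative sketch, not taken from the book, the classic UCB1 index rule on a three-armed Bernoulli bandit shows that tradeoff: each arm's index adds a confidence radius (exploration) to its empirical mean (exploitation). The arm means and function name below are invented for illustration.

```python
# A minimal sketch of the exploration-exploitation tradeoff described in
# the abstract: the UCB1 rule on a Bernoulli bandit. Arm means are made up.
import math
import random

def ucb1(arm_means, horizon=10_000, seed=0):
    """Play a Bernoulli bandit for `horizon` rounds using the UCB1 index."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k    # times each arm was pulled
    totals = [0.0] * k  # cumulative reward per arm

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull every arm once to initialize the indices
        else:
            # Index = empirical mean (exploitation)
            #       + confidence radius (exploration).
            arm = max(
                range(k),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward

    # Regret: what the best arm would have earned minus what we earned.
    regret = max(arm_means) * horizon - sum(totals)
    return counts, regret

counts, regret = ucb1([0.3, 0.5, 0.7])
print(counts)  # pulls concentrate on the best arm (mean 0.7)
print(regret)  # cumulative regret grows only logarithmically in the horizon
```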