Information retrieval evaluation

By: Harman, D. K. (Donna K.).
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on information concepts, retrieval, and services, # 19
Publisher: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, c2011
Description: 1 electronic text (x, 107 p.) : digital file
ISBN: 9781598299724 (electronic bk.)
Subject(s): Information retrieval -- Evaluation | Information storage and retrieval systems -- Evaluation | Evaluation | Test collections | Information retrieval | Cranfield paradigm | TREC
DDC classification: 025.04
Online resources: Abstract with links to resource
Contents:
1. Introduction and early history -- Introduction -- The Cranfield tests -- The MEDLARS evaluation -- The SMART system and early test collections -- The Comparative Systems Laboratory at Case Western University -- Cambridge and the "Ideal" Test Collection -- Additional work in metrics up to 1992 --
2. "Batch" Evaluation Since 1992 -- 2.1. Introduction -- 2.2. The TREC evaluations -- 2.3. The TREC ad hoc tests (1992-1999) -- Building the ad hoc collections -- Analysis of the ad hoc collections -- The TREC ad hoc metrics -- 2.4. Other TREC retrieval tasks -- Retrieval from "noisy" text -- Retrieval of non-English documents -- Very large corpus, web retrieval, and enterprise searching -- Domain-specific retrieval tasks -- Pushing the limits of the Cranfield model -- 2.5. Other evaluation campaigns -- NTCIR -- CLEF -- INEX -- 2.6. Further work in metrics -- 2.7. Some advice on using, building and evaluating test collections -- Using existing collections -- Subsetting or modifying existing collections -- Building and evaluating new ad hoc collections -- Dealing with unusual data -- Building web data collections --
3. Interactive Evaluation -- Introduction -- Early work -- Interactive evaluation in TREC -- Case studies of interactive evaluation -- Interactive evaluation using log data --
4. Conclusion -- Introduction -- Some thoughts on how to design an experiment -- Some recent issues in evaluation of information retrieval -- A personal look at some future challenges -- Bibliography -- Author's biography.
Abstract: Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today. The lecture begins with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain.
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE352
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 87-105).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex; INSPEC; Google Scholar; Google Book Search.

Also available in print.

Title from PDF t.p. (viewed on June 18, 2011).
