Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Quantifying research integrity /

By: Seadle, Michael S., 1950- [author].
Material type: Book.
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on information concepts, retrieval, and services, # 53.
Publisher: [San Rafael, California] : Morgan & Claypool, 2017.
Description: 1 PDF (xix, 121 pages) : illustrations.
Content type: text. Media type: electronic. Carrier type: online resource.
ISBN: 9781627059671.
Subject(s): Research -- Moral and ethical aspects | Integrity | Experimental design | research integrity | plagiarism | data falsification | image manipulation | grayscale decisions | research fraud | detection tools | plagiarism tools | forensic droplets | Retraction Watch | Office of Research Integrity | HEADT Centre.
DDC classification: 174.95
Online resources: Abstract with links to resource.
Also available in print.
Contents:
1. Introduction -- 1.1 Overview -- 1.2 Context -- 1.3 Time -- 1.4 Images --
2. State of the art -- 2.1 Introduction -- 2.2 Legal issues -- 2.3 Ethics -- 2.3.1 Second-language students -- 2.3.2 Self-plagiarism -- 2.4 Prevention -- 2.4.1 Education -- 2.4.2 Detection as prevention -- 2.5 Detection tools -- 2.5.1 Plagiarism tools -- 2.5.2 iThenticate -- 2.5.3 Crowdsourcing -- 2.5.4 Image-manipulation tools -- 2.6 Replication --
3. Quantifying plagiarism -- 3.1 Overview -- 3.1.1 History -- 3.1.2 Definition -- 3.1.3 Pages and percents -- 3.1.4 Context, quotes, and references -- 3.1.5 Sentences, paragraphs, and other units -- 3.1.6 Self-plagiarism -- 3.2 In the humanities -- 3.2.1 Overview -- 3.2.2 Paragraph-length examples -- 3.2.3 Book-length examples -- 3.3 In the social sciences -- 3.3.1 Overview -- 3.3.2 Example 1 -- 3.3.3 Example 2 -- 3.4 In the natural sciences -- 3.4.1 Overview -- 3.4.2 Example 1 -- 3.4.3 Example 2 -- 3.5 Conclusion: plagiarism --
4. Quantifying data falsification -- 4.1 Introduction -- 4.2 Metadata -- 4.3 Humanities -- 4.3.1 Introduction -- 4.3.2 History -- 4.3.3 Art and art history -- 4.3.4 Ethnography -- 4.3.5 Literature -- 4.4 Social sciences -- 4.4.1 Introduction -- 4.4.2 Replication studies -- 4.4.3 Diederik Stapel -- 4.4.4 James Hunton -- 4.4.5 Database revisions -- 4.4.6 Data manipulation -- 4.5 Natural sciences -- 4.5.1 Introduction -- 4.5.2 Lab sciences -- 4.5.3 Medical sciences -- 4.5.4 Computing and statistics -- 4.5.5 Other non-lab sciences -- 4.6 Conclusion --
5. Quantifying image manipulation -- 5.1 Introduction -- 5.2 Digital imaging technology -- 5.2.1 Background -- 5.2.2 How a digital camera works -- 5.2.3 Raw format -- 5.2.4 Discovery analytics -- 5.2.5 Digital video -- 5.3 Arts and humanities -- 5.3.1 Introduction -- 5.3.2 Arts -- 5.3.3 Humanities -- 5.4 Social sciences and computing -- 5.4.1 Overview -- 5.4.2 Training and visualization -- 5.4.3 Standard manipulations -- 5.5 Biology -- 5.5.1 Legitimate manipulations -- 5.5.2 Illegitimate manipulations -- 5.6 Medicine -- 5.6.1 Limits -- 5.6.2 Case 1 -- 5.6.3 Case 2 -- 5.7 Other natural sciences -- 5.8 Detection tools and services -- 5.9 Conclusion --
6. Applying the metrics -- 6.1 Introduction -- 6.2 Detecting gray zones -- 6.3 Determining falsification -- 6.4 Prevention -- 6.5 Conclusion -- 6.6 HEADT Centre --
Bibliography -- Author's biography.
Abstract: Institutions typically treat research integrity violations as black and white, right or wrong. The result is that the wide range of grayscale nuances separating accident, carelessness, and bad practice from deliberate fraud and malpractice often gets lost. This lecture looks at how to quantify the grayscale range in three kinds of research integrity violations: plagiarism, data falsification, and image manipulation. Quantification works best with plagiarism, because the essential one-to-one matching algorithms are well known and established tools exist for detecting matches. Questions remain, however, of how many matching words of what kind, in what location, and in which discipline constitute reasonable suspicion of fraudulent intent. Different disciplines take different perspectives on quantity and location. Quantification is harder with data falsification, because the original data are often not available and because experimental replication remains surprisingly difficult. The same is true with image manipulation, where tools exist for detecting certain kinds of manipulations, but where the tools are also easily defeated. This lecture looks at how to prevent violations of research integrity from a pragmatic viewpoint, and at what steps institutions and publishers can take to discourage problems beyond the usual ethical admonitions. There are no simple answers, but two measures can help: the systematic use of detection tools and requiring original data and images. These alone do not suffice, but they represent a start. The scholarly community needs a better awareness of the complexity of research integrity decisions. Only an open and widespread international discussion can bring about a consensus on where the boundary lines are and when grayscale problems shade into black. One goal of this work is to move that discussion forward.
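The one-to-one matching the abstract refers to can be illustrated with a minimal word n-gram overlap check. This is an illustrative sketch only, not a tool or method from the book; the function names and the fixed n-gram size are assumptions, and real plagiarism detectors normalize text and weight matches far more carefully.

```python
def ngrams(text, n=5):
    # Split the text into overlapping word n-grams (case-folded).
    # Real tools also strip punctuation and handle word-order changes.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, source, n=5):
    # Fraction of the candidate's n-grams that also occur in the source:
    # a crude proxy for the "how many matching words" question the
    # abstract raises. Returns a value between 0.0 and 1.0.
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

if __name__ == "__main__":
    src = "the wide range of grayscale nuances often gets lost"
    copy = "the wide range of grayscale nuances is frequently ignored"
    print(overlap_score(copy, src, n=3))
```

Even with identical n-gram counts, the score alone cannot distinguish a properly quoted passage from a copied one, which is exactly the grayscale problem the lecture quantifies.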
Item type: E-books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE742
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 111-119).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex -- INSPEC -- Google Scholar -- Google Book Search.

Title from PDF title page (viewed on January 24, 2017).
