000 06521nam a2200757 i 4500
001 7385400
003 IEEE
005 20200413152920.0
006 m eo d
007 cr cn |||m|||a
008 160122s2016 caua foab 000 0 eng d
020 _a9781627058445
_qebook
020 _z9781627058438
_qprint
024 7 _a10.2200/S00690ED1V01Y201512SPT015
_2doi
035 _a(CaBNVSL)swl00406110
035 _a(OCoLC)935806387
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aHF5548.37
_b.D653 2016
082 0 4 _a658.478
_223
100 1 _aDomingo-Ferrer, Josep,
_eauthor.
245 1 0 _aDatabase anonymization :
_bprivacy models, data utility, and microaggregation-based inter-model connections /
_cJosep Domingo-Ferrer, David Sánchez, and Jordi Soria-Comas.
264 1 _aSan Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA) :
_bMorgan & Claypool,
_c2016.
300 _a1 PDF (xv, 120 pages) :
_billustrations.
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aSynthesis lectures on information security, privacy, and trust,
_x1945-9750 ;
_v# 15
538 _aMode of access: World Wide Web.
538 _aSystem requirements: Adobe Acrobat Reader.
500 _aPart of: Synthesis digital library of engineering and computer science.
504 _aIncludes bibliographical references (pages 109-118).
505 0 _a1. Introduction --
505 8 _a2. Privacy in data releases -- 2.1 Types of data releases -- 2.2 Microdata sets -- 2.3 Formalizing privacy -- 2.4 Disclosure risk in microdata sets -- 2.5 Microdata anonymization -- 2.6 Measuring information loss -- 2.7 Trading off information loss and disclosure risk -- 2.8 Summary --
505 8 _a3. Anonymization methods for microdata -- 3.1 Non-perturbative masking methods -- 3.2 Perturbative masking methods -- 3.3 Synthetic data generation -- 3.4 Summary --
505 8 _a4. Quantifying disclosure risk: record linkage -- 4.1 Threshold-based record linkage -- 4.2 Rule-based record linkage -- 4.3 Probabilistic record linkage -- 4.4 Summary --
505 8 _a5. The k-anonymity privacy model -- 5.1 Insufficiency of data de-identification -- 5.2 The k-anonymity model -- 5.3 Generalization and suppression based k-anonymity -- 5.4 Microaggregation-based k-anonymity -- 5.5 Probabilistic k-anonymity -- 5.6 Summary --
505 8 _a6. Beyond k-anonymity: l-diversity and t-closeness -- 6.1 l-diversity -- 6.2 t-closeness -- 6.3 Summary --
505 8 _a7. t-closeness through microaggregation -- 7.1 Standard microaggregation and merging -- 7.2 t-closeness aware microaggregation: k-anonymity-first -- 7.3 t-closeness aware microaggregation: t-closeness-first -- 7.4 Summary --
505 8 _a8. Differential privacy -- 8.1 Definition -- 8.2 Calibration to the global sensitivity -- 8.3 Calibration to the smooth sensitivity -- 8.4 The exponential mechanism -- 8.5 Relation to k-anonymity-based models -- 8.6 Differentially private data publishing -- 8.7 Summary --
505 8 _a9. Differential privacy by multivariate microaggregation -- 9.1 Reducing sensitivity via prior multivariate microaggregation -- 9.2 Differentially private data sets by insensitive microaggregation -- 9.3 General insensitive microaggregation -- 9.4 Differential privacy with categorical attributes -- 9.5 A semantic distance for differential privacy -- 9.6 Integrating heterogeneous attribute types -- 9.7 Summary --
505 8 _a10. Differential privacy by individual ranking microaggregation -- 10.1 Limitations of multivariate microaggregation -- 10.2 Sensitivity reduction via individual ranking -- 10.3 Choosing the microaggregation parameter k -- 10.4 Summary --
505 8 _a11. Conclusions and research directions -- 11.1 Summary and conclusions -- 11.2 Research directions -- Bibliography -- Authors' biographies.
506 1 _aAbstract freely available; full-text restricted to subscribers or individual document purchasers.
510 0 _aCompendex
510 0 _aINSPEC
510 0 _aGoogle Scholar
510 0 _aGoogle Book Search
520 3 _aThe current social and economic context increasingly demands open data to improve scientific research and decision making. However, when published data refer to individual respondents, disclosure risk limitation techniques must be implemented to anonymize the data and guarantee by design the fundamental right to privacy of the subjects the data refer to. Disclosure risk limitation has a long track record in the statistical and computer science research communities, which have developed a variety of privacy-preserving solutions for data releases. This Synthesis Lecture provides a comprehensive overview of the fundamentals of privacy in data releases, focusing on the computer science perspective. Specifically, we detail the privacy models, anonymization methods, and utility and risk metrics that have been proposed so far in the literature. In addition, as a more advanced topic, we identify and discuss in detail the connections between several privacy models (i.e., how the privacy guarantees they offer can be combined to achieve more robust protection, and when such guarantees are equivalent or complementary); we also explore the links between anonymization methods and privacy models (how anonymization methods can be used to enforce privacy models and thereby offer ex ante privacy guarantees). These latter topics are relevant to researchers and advanced practitioners, who will gain a deeper understanding of the available data anonymization solutions and the privacy guarantees they can offer.
530 _aAlso available in print.
588 _aTitle from PDF title page (viewed on January 22, 2016).
650 0 _aData protection.
650 0 _aDatabase security.
653 _adata releases
653 _aprivacy protection
653 _aanonymization
653 _aprivacy models
653 _astatistical disclosure limitation
653 _astatistical disclosure control
653 _amicroaggregation
700 1 _aSánchez, David,
_eauthor.
700 1 _aSoria-Comas, Jordi,
_eauthor.
776 0 8 _iPrint version:
_z9781627058438
830 0 _aSynthesis digital library of engineering and computer science.
830 0 _aSynthesis lectures on information security, privacy, and trust ;
_v# 15.
_x1945-9750
856 4 2 _3Abstract with links to resource
_uhttp://ieeexplore.ieee.org/servlet/opac?bknumber=7385400
999 _c562181
_d562181