
Data simplification : taming information with open source tools [electronic resource]

by Berman, Jules J. [author].
Material type: Book
Publisher: Cambridge, MA : Morgan Kaufmann, an imprint of Elsevier, 2016
Description: 1 online resource
ISBN: 9780128038543; 0128038543; 0128037814; 9780128037812
Subject(s): Open source software | Data mining | Database management | COMPUTERS -- Programming -- Open Source | COMPUTERS -- Software Development & Engineering -- Tools | COMPUTERS -- Software Development & Engineering -- General | Electronic books
Online resources: ScienceDirect
Contents:
Front cover; Data Simplification: Taming Information With Open Source Tools; Copyright; Dedication; Contents; Foreword; Preface; Organization of this book; Chapter Organization; How to Read this Book; Nota Bene; Glossary; References; Author Biography.
Chapter 1: The Simple Life -- 1.1. Simplification Drives Scientific Progress; 1.2. The Human Mind is a Simplifying Machine; 1.3. Simplification in Nature; 1.4. The Complexity Barrier; 1.5. Getting Ready; Open Source Tools: Perl; Python; Ruby; Text Editors; OpenOffice; LibreOffice; Command Line Utilities; Cygwin, Linux Emulation for Windows; DOS Batch Scripts; Linux Bash Scripts; Interactive Line Interpreters; Package Installers; System Calls; Glossary; References.
Chapter 2: Structuring Text -- 2.1. The Meaninglessness of Free Text; 2.2. Sorting Text, the Impossible Dream; 2.3. Sentence Parsing; 2.4. Abbreviations; 2.5. Annotation and the Simple Science of Metadata; 2.6. Specifications Good, Standards Bad; Open Source Tools: ASCII; Regular Expressions; Format Commands; Converting Nonprintable Files to Plain-Text; Dublin Core; Glossary; References.
Chapter 3: Indexing Text -- 3.1. How Data Scientists Use Indexes; 3.2. Concordances and Indexed Lists; 3.3. Term Extraction and Simple Indexes; 3.4. Autoencoding and Indexing with Nomenclatures; 3.5. Computational Operations on Indexes; Open Source Tools: Word Lists; Doublet Lists; Ngram Lists; Glossary; References.
Chapter 4: Understanding Your Data -- 4.1. Ranges and Outliers; 4.2. Simple Statistical Descriptors; 4.3. Retrieving Image Information; 4.4. Data Profiling; 4.5. Reducing Data; Open Source Tools: Gnuplot; MatPlotLib; R, for Statistical Programming; Numpy; Scipy; ImageMagick; Displaying Equations in LaTex; Normalized Compression Distance; Pearson's Correlation; The Ridiculously Simple Dot Product; Glossary; References.
Chapter 5: Identifying and Deidentifying Data -- 5.1. Unique Identifiers; 5.2. Poor Identifiers, Horrific Consequences; 5.3. Deidentifiers and Reidentifiers; 5.4. Data Scrubbing; 5.5. Data Encryption and Authentication; 5.6. Timestamps, Signatures, and Event Identifiers; Open Source Tools: Pseudorandom Number Generators; UUID; Encryption and Decryption with OpenSSL; One-Way Hash Implementations; Steganography; Glossary; References.
Chapter 6: Giving Meaning to Data -- 6.1. Meaning and Triples; 6.2. Driving Down Complexity With Classifications; 6.3. Driving Up Complexity With Ontologies; 6.4. The Unreasonable Effectiveness of Classifications; 6.5. Properties That Cross Multiple Classes; Open Source Tools: Syntax for Triples; RDF Schema; RDF Parsers; Visualizing Class Relationships; Glossary; References.
Chapter 7: Object-oriented Data -- 7.1. The Importance of Self-Explaining Data; 7.2. Introspection and Reflection; 7.3. Object-Oriented Data Objects; 7.4. Working With Object-Oriented Data; Open Source Tools: Persistent Data; SQLite Databases; Glossary; References.
Summary: Data Simplification: Taming Information With Open Source Tools addresses the simple fact that modern data is too big and complex to analyze in its native form. Data simplification is the process whereby large and complex data is rendered usable. Complex data must be simplified before it can be analyzed, but the process of data simplification is anything but simple, requiring a specialized set of skills and tools. This book provides data scientists from every scientific discipline with the methods and tools to simplify their data for immediate analysis or long-term storage in a form that can be readily repurposed or integrated with other data. Drawing upon years of practical experience, and using numerous examples and use cases, Jules Berman discusses the principles, methods, and tools that must be studied and mastered to achieve data simplification; the open source tools, free utilities, and snippets of code that can be reused and repurposed to simplify data; natural language processing and machine translation as tools for simplifying data; and data summarization and visualization and the role they play in making data useful for the end user.

Includes bibliographical references and index.

Online resource; title from PDF title page (EBSCO, viewed March 21, 2016).

