DiscoverDataScience.org

The Data Scientist’s Toolkit: Hadoop

By Kat Campise, Data Scientist, Ph.D.

Just about everything we do on a daily basis generates data. Whether we stroll through the aisles at our local grocery store or send a quick text message to a friend or family member, a record of that activity is making its way into someone's database. Every enterprise is now in the data (and tech) business, even a small restaurant or a lone freelancer trying to attract additional clientele. On a larger scale, massive data collection, processing, and analysis require equally substantial storage and computational resources. Thus, we now have Hadoop.

What is Hadoop?

Hadoop is a software ecosystem used for data storage and computation. Created by Doug Cutting, a software designer, and Mike Cafarella, a computer scientist, Hadoop was initially based on a paper entitled “The Google File System.” Therefore, it could be accurately stated that Google spawned the Hadoop idea, which isn't a far stretch given that Google manages one of the most extensive datasets in the world: the index behind its search engine. Since the release of Hadoop 1.0 in 2011, the distributed storage and processing framework has blossomed into a vast ecosystem which includes, but is not limited to:

  • Apache Pig: a high-level scripting platform used with Hadoop for data analysis tasks; its language, Pig Latin, lets users describe data transformations instead of writing MapReduce code in Java.
  • Apache Spark: a cluster computing framework that provides distributed, in-memory data processing for more complex tasks such as machine learning.
  • Apache Hive: provides an SQL-like language (HiveQL) for data querying, summarization, analysis, and exploration.
  • Apache Flume: data collection software that can handle massive amounts of streaming data; Flume is used for data ingestion.
  • YARN (Yet Another Resource Negotiator): used for cluster resource management and job scheduling.

Other than the basic architecture (described in more detail below), Hadoop has a plethora of open source software utilities which include NoSQL and NewSQL databases, distributed programming frameworks, data ingestion software, and a variety of data visualization capabilities (e.g., Hadoop can be used in conjunction with R, Tableau, and SAS Visual Analytics). Thus, Hadoop is a comprehensive platform with many interchangeable utilities.

Basic Hadoop Architecture

In the beginning, before the explosion in the number of software utilities available for Hadoop integration, there were two primary Hadoop building modules: the Hadoop Distributed File System (HDFS) and MapReduce. The HDFS system is relatively self-explanatory; it distributes datasets across what's known as “commodity hardware” (inexpensive, readily available computers, as opposed to specialized high-end servers).

If we think in terms of a social media giant such as Facebook, which is perpetually collecting and managing enormous amounts of data, HDFS provides the infrastructure needed for computation and storage which is “shared” (or distributed) across the commodity hardware. It’s similar to working on a collaborative project where each member of the group is tasked with aggregating information about a topic; thus, the workload is reduced. One of Hadoop’s strengths is its ability to scale up or down, so as more resources are needed, the Hadoop system can handle the data volume fluctuation. HDFS is the base layer of the entire Hadoop ecosystem.
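The core idea of distributing a file across commodity machines can be sketched in a few lines of plain Python. This is a toy illustration only, not HDFS itself: the block size, replication factor, and node names below are made-up stand-ins (real HDFS defaults to 128 MB blocks with 3 replicas and uses rack-aware placement managed by the NameNode).

```python
# Toy sketch of HDFS-style storage: split a file into fixed-size blocks,
# then store each block on several "DataNodes" so no single machine failure
# loses data. All sizes and names here are illustrative, not HDFS defaults.

BLOCK_SIZE = 16        # bytes per block (real HDFS default: 128 MB)
REPLICATION = 3        # copies of each block (real HDFS default: 3)
DATANODES = ["node-1", "node-2", "node-3", "node-4"]

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Chop the file into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks: int, nodes, replication: int = REPLICATION):
    """Round-robin placement: each block is stored on `replication` nodes."""
    return {
        b: [nodes[(b + r) % len(nodes)] for r in range(replication)]
        for b in range(num_blocks)
    }

file_data = b"Hadoop distributes large files across many cheap machines."
blocks = split_into_blocks(file_data)
plan = place_replicas(len(blocks), DATANODES)

for block_id, held_by in plan.items():
    print(f"block {block_id} ({len(blocks[block_id])} bytes) -> {held_by}")
```

Reassembling the blocks in order recovers the original file, which is essentially what an HDFS client does on read: ask the NameNode where the blocks live, then fetch them from the DataNodes.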

MapReduce is frequently defined as a programming model for processing the data stored in HDFS. In one sense, it can be viewed as a data project manager: the map step splits the data into smaller pieces and distributes the fragments to computer clusters for parallel processing (meaning all of the pieces of the original dataset are processed simultaneously), and the reduce step gathers the intermediate results and combines them into the final answer.
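The map/shuffle/reduce flow described above can be sketched with the classic word-count example in plain Python. This is a sequential simulation of the pattern, not Hadoop's actual API: on a real cluster the framework runs the map calls in parallel on the nodes holding each HDFS block and performs the shuffle itself.

```python
from collections import defaultdict

# Sequential sketch of the MapReduce word-count pattern. On a real Hadoop
# cluster these phases run distributed and in parallel; here they run one
# after another so the data flow is easy to follow.

def map_phase(split: str):
    """Map: emit a (word, 1) pair for every word in one input split."""
    return [(word.lower(), 1) for word in split.split()]

def shuffle(pairs):
    """Shuffle: group the intermediate pairs by key (word)."""
    groups = defaultdict(list)
    for word, count in pairs:
        groups[word].append(count)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Two "splits", as if the input file occupied two HDFS blocks.
splits = ["big data big storage", "big compute data"]

intermediate = []
for split in splits:          # each split is mapped independently
    intermediate.extend(map_phase(split))

counts = reduce_phase(shuffle(intermediate))
print(counts)                 # counts 'big' 3 times, 'data' twice, etc.
```

Because each split is mapped independently and each word's counts are reduced independently, both phases parallelize naturally, which is exactly why the model scales across a cluster.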

Who Uses Hadoop?

Many of the tech behemoths utilize Hadoop. Among the most recognizable Hadoop user base are:

  • Amazon
  • Alibaba
  • eBay
  • Hulu
  • LinkedIn

Google formerly used MapReduce — which makes sense since they created MapReduce over a decade ago. However, Google is not known for resting on its laurels, so the tech brainiacs switched to deploying Cloud Dataflow in lieu of MapReduce. Regardless of Google’s transition, plenty of enterprises continue to use the Hadoop system (including MapReduce). Thus, learning how to use Hadoop’s core functions along with its additional software integrations continues to be valuable knowledge for a data scientist.

Where to Get Started with Hadoop

Often, when jumping into a new career, there is a “chicken and egg” problem: you need experience to be considered for the job, but on-the-job experience is the primary way to gain the required knowledge. Fortunately, for just about everything data science related, we are in the open source age, where would-be learners can find tutorials and massive open online courses (MOOCs) that are freely available (or learners can earn certificates for a small-ish fee).

  • Coursera offers a Big Data specialization that includes a specific course on the Hadoop Platform. Big Data Essentials, Big Data Analysis, and Data Science at Scale also provide ample information and practice for Hadoop software. Learners can either audit the courses for free or pay to access all course materials (for some courses, auditing doesn’t include completing quizzes and assignments).
  • edX has courses in Big Data and an Introduction to Apache Hadoop offered by the Linux Foundation. Most, if not all, of the edX course offerings can be completed free of charge. Learners won’t receive a certificate, but they’ll still have access to the video lectures and other course materials.
  • Udemy has a selection of Hadoop offerings that are either free or at a reasonable cost. Since these are courses created by individuals rather than official academic institutions, the mileage may vary in terms of overall quality.
  • Udacity has partnered with Cloudera to offer an Intro to Hadoop and MapReduce course at no cost to the learner: it’s absolutely free. However, they recommend that learners have at least introductory knowledge of computer science.

It’s important to note that being a data scientist requires curiosity. While data science is an emerging industry, it’s situated at the nexus of at least two sectors that are continually evolving: business and technology. Both sectors are driven by innovation, and the insight data scientists derive from the vast data pools (or data oceans) is fundamental to an enterprise’s progress. The courses above are only the beginning of the lifelong learning that is data science.

© Copyright 2021 https://www.discoverdatascience.org · All Rights Reserved
