The Data Scientist’s Toolkit: Hadoop

By Kat Campise, Data Scientist, Ph.D.

Just about everything we do on a daily basis generates data. Whether we stroll through the aisles at our local grocery store or send a quick text message to a friend or family member, some activity or digital thought sharing is making its way from us to someone’s database. Every enterprise is now in the data (and tech) industry, whether it’s a small restaurant or a lone freelancer trying to attract additional clientele. On a larger scale, massive data collection, processing, and analysis require equally substantial storage and computational resources. Thus, we now have Hadoop.
What is Hadoop?

Hadoop is a software ecosystem used for data storage and computation. Created by Doug Cutting, a software designer, and Mike Cafarella, a computer scientist, Hadoop was initially based on a paper entitled “The Google File System.” It could therefore be accurately stated that Google spawned the Hadoop idea, which isn’t a far stretch since Google manages one of the most extensive datasets in the world through its search engine. Initially released in 2011, Hadoop’s seven-year evolution as a distributed storage and processing system has blossomed into a vast ecosystem which includes, but is not limited to:
- Apache Pig: a high-level scripting language, broadly similar to SQL, used with Hadoop as an alternative to writing code in Java; it’s commonly used for data analysis tasks.
- Apache Spark: a cluster computing framework that provides distributed data processing for more complex tasks such as machine learning.
- Apache Hive: provides an SQL-like interface for data querying, summarization, analysis, and exploration.
- Apache Flume: data collection software that can handle massive amounts of streaming data; Flume is used for data ingestion.
- Apache YARN: used for cluster resource management and job scheduling.
Basic Hadoop Architecture

In the beginning, before the explosion in the number of software utilities available for Hadoop integration, there were two primary Hadoop building modules: the Hadoop Distributed File System (HDFS) and MapReduce.

The HDFS system is relatively self-explanatory; it distributes datasets across what’s known as “commodity hardware” (low-cost and low-performance computers). If we think in terms of a social media giant such as Facebook, which is perpetually collecting and managing enormous amounts of data, HDFS provides the infrastructure needed for computation and storage, which is “shared” (or distributed) across the commodity hardware. It’s similar to working on a collaborative project where each member of the group is tasked with aggregating information about a topic; thus, the workload is reduced. One of Hadoop’s strengths is its ability to scale up or down, so as more resources are needed, the Hadoop system can handle the data volume fluctuation. HDFS is the base layer of the entire Hadoop ecosystem.

MapReduce is frequently defined as a programming model that acts as a go-between for HDFS and the rest of the Hadoop system. In one sense, it can be viewed as a data project manager that splits the data into smaller pieces and distributes the fragments to computer clusters for parallel processing (which means that all of the data pieces from the original dataset are processed simultaneously).
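The split-then-aggregate flow described above can be sketched in a few lines of plain Python. This is a toy, single-machine simulation of the MapReduce programming model (not Hadoop’s actual Java API): a map phase emits key–value pairs, a shuffle step groups values by key, and a reduce phase aggregates each group, here as the classic word count.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every input record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as Hadoop does
    between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values -- summing counts here."""
    return {key: sum(values) for key, values in groups.items()}

# Each record stands in for a fragment of the dataset that Hadoop
# would hand to a separate worker node.
records = ["big data big storage", "data processing"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts == {"big": 2, "data": 2, "storage": 1, "processing": 1}
```

In real Hadoop, the map and reduce functions run in parallel on many machines and the shuffle happens over the network, but the logical contract is the same as in this sketch.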
Who Uses Hadoop?

Many of the tech behemoths utilize Hadoop. Among the most recognizable members of the Hadoop user base are:
Where to Get Started with Hadoop

Often, when jumping into a new career, there exists a “chicken and egg” problem: you need experience to be considered for the job, but on-the-job experience is the primary way to gain the required knowledge. Fortunately, for just about everything data science related, we are in the open source age, where would-be learners can find tutorials and massive open online courses (MOOCs) that are freely available (or learners can earn certificates for a small-ish fee).
- Coursera offers a Big Data specialization that includes a specific course on the Hadoop Platform. Big Data Essentials, Big Data Analysis, and Data Science at Scale also provide ample information and practice with Hadoop software. Learners can either audit the courses for free or pay to access all course materials (for some courses, auditing doesn’t include completing quizzes and assignments).
- edX has courses in Big Data and an Introduction to Apache Hadoop offered by the Linux Foundation. Most, if not all, of the edX course offerings can be completed free of charge. Learners won’t receive a certificate, but they’ll still have access to the video lectures and other course materials.
- Udemy has a selection of Hadoop offerings that are either free or reasonably priced. Since these courses are created by individuals rather than official academic institutions, your mileage may vary in terms of overall quality.
- Udacity has partnered with Cloudera to offer an Intro to Hadoop and MapReduce course at no cost to the learner. However, they recommend that learners have at least introductory knowledge of computer science.