Big Data Analytics Using Spark
About this Course
In data science, data is called "big" if it cannot fit into the memory of a single standard laptop or workstation. Analyzing big datasets requires a cluster of tens, hundreds, or thousands of computers. Using such clusters effectively requires distributed file systems, such as the Hadoop Distributed File System (HDFS), and corresponding computational frameworks, such as Hadoop MapReduce and Spark. In this course, part of the Data Science MicroMasters program, you will learn what the bottlenecks in massively parallel computation are and how to use Spark to minimize them. You will learn how to perform supervised and unsupervised machine learning on massive datasets using the Machine Learning Library (MLlib). In this course, as in the others in this MicroMasters program, you will gain hands-on experience using PySpark within the Jupyter notebook environment.

Created by: The University of California, San Diego
Level: Advanced
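
As a rough illustration of the kind of workflow the course description mentions (PySpark with MLlib in a notebook environment), the minimal sketch below loads a dataset into a distributed DataFrame and fits a supervised model. It is not taken from the course itself; the file name, column names, and model choice are hypothetical placeholders.

# Minimal illustrative sketch of a PySpark + MLlib supervised-learning workflow.
# "data.csv", "feature1", "feature2", and "label" are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Start a Spark session; on a real cluster this would connect to a cluster manager.
spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Load a (hypothetical) CSV file into a distributed DataFrame.
df = spark.read.csv("data.csv", header=True, inferSchema=True)

# Assemble the numeric feature columns into the single vector column MLlib expects.
assembler = VectorAssembler(inputCols=["feature1", "feature2"], outputCol="features")
train = assembler.transform(df)

# Fit a logistic regression model; the computation is distributed across the cluster.
model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
print(model.coefficients)

spark.stop()
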

Related Online Courses
Basics of Bayesian Data Analysis Using R is part one of the Bayesian Data Analysis in R professional certificate. The Bayesian approach is becoming increasingly popular in all fields of data analysis,...
This course will enable you to develop your skills as a decision maker based on the following competencies: analysis of statistical elements of information, concepts and fun...
This course discusses properties and applications of random variables. When you’re done, you’ll have enough firepower to undertake a wide variety of modeling and analysis problems; and you’ll be we...
Today the principles and techniques of reproducible research are more important than ever, across diverse disciplines from astrophysics to political science. No one wants to do research that...
Every single minute, computers across the world collect millions of gigabytes of data. What can you do to make sense of this mountain of data? How do data scientists use this data for the...