Organizations use their data to support and influence decisions and to build data-intensive products and services, such as recommendation, prediction, and diagnostic systems. The collection of skills required by organizations to support these functions has been grouped under the term ‘data science’. This statistics and data analysis course will attempt to articulate the expected output of data scientists and then teach students how to use PySpark (part of Spark) to deliver against these expectations. The course assignments include log mining, textual entity recognition, and collaborative filtering exercises that teach students how to manipulate data sets using parallel processing with PySpark. This course covers advanced undergraduate-level material. It requires a programming background and experience with Python (or the ability to learn it quickly). All exercises will use PySpark (the Python API for Spark), and previous experience with Spark equivalent to Introduction to Apache Spark is required.
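To give a flavor of the kind of PySpark data manipulation the assignments describe, here is a minimal, hypothetical sketch of a log-mining style task (the input path, log format, and message layout are assumptions for illustration, not taken from the course materials):

```python
# Hypothetical log-mining sketch; the file name and log format are assumptions.
from pyspark import SparkContext

sc = SparkContext("local[*]", "log-mining-sketch")

# Assume each line looks like: "2021-07-01 ERROR disk full on node-3"
logs = sc.textFile("access.log")

errors = logs.filter(lambda line: " ERROR " in line)   # transformation (lazy)
print("error lines:", errors.count())                  # action (runs the job)

# Count occurrences of each error message, MapReduce style.
by_message = (errors
              .map(lambda line: (line.split(" ERROR ")[1], 1))
              .reduceByKey(lambda a, b: a + b))
print(by_message.take(5))

sc.stop()
```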
Specification: Big Data Analysis with Apache Spark
6 reviews for Big Data Analysis with Apache Spark
| Detail | Value |
|---|---|
| Price | Free |
| Provider | |
| Duration | 30 hours |
| Year | 2021 |
| Level | Intermediate |
| Language | English |
| Certificate | Yes |
| Quizzes | No |
Anoop Toffy –
It was a nice course. I loved it.
Good intro to the PySpark API.
Nice set of problem sets.
As part of it, if you are lucky, you will get access to the Databricks cloud.
Gaurav Srivastva –
Lectures are very light in content and disappointing, but the labs are good and do require students to investigate and complete them.
Gregory J Hamel ( Life Is Study) –
CS100.1x Introduction to Big Data with Apache Spark is a 5-week intro to distributed computing offered by UC Berkeley through the edX MOOC platform, focused on teaching students how to perform large-scale computation using Apache Spark. The assignments use PySpark, Spark’s Python API, so some familiarity with Python programming is necessary. You don’t need prior exposure to big data or distributed computing to take the course. Grades are based on four programming labs (80%), easy comprehension questions that allow unlimited attempts (12%), and setup of the course virtual machine used to complete the labs (8%).
Course lectures in Introduction to Big Data with Apache Spark are relatively brief and tend to stay at a high level, discussing general big data concepts rather than the details of Apache Spark. The instructor does a fine job in the few lectures the course offers, but there were not enough of them and they often felt disconnected from the assignments. The fifth week had no lectures.
The labs are the core of this course. While you can breeze through weekly lectures in half an hour or less, each of the four labs is a lengthy reading and programming assignment packaged in an IPython notebook. Expect to spend 2 to 4 hours on labs 1, 2 and 4, and 3 to 6 hours on lab 3. The labs start by teaching basic Apache Spark manipulations and move on to some text analysis and machine learning. Using the IPython notebook to deliver labs is a convenient way to intermingle text and instructions with code. On the other hand, each exercise tends to depend on code executed somewhere above it, so a mistake made on an earlier exercise can lead to some odd errors later on, and Spark’s error traces aren’t particularly helpful. The course does provide some basic tests for each exercise, but it is easy to arrive at solutions that pass the checks but cause errors later on. The course forums on Piazza are a vital resource for troubleshooting and disambiguation; I imagine some of the snags will be resolved in future offerings. Despite the occasional hiccups, the labs do a good job familiarizing students with Apache Spark’s Resilient Distributed Dataset objects and the various transformations and actions you can perform with them.
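For readers unfamiliar with the terms in this review, here is a small illustrative snippet (not from the course labs) showing the difference between lazy RDD transformations and the actions that actually trigger computation in PySpark:

```python
# Illustrative only; not taken from the course labs.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-basics")

numbers = sc.parallelize(range(10))            # build an RDD from a Python range

squares = numbers.map(lambda x: x * x)         # transformation: nothing runs yet
evens = squares.filter(lambda x: x % 2 == 0)   # another lazy transformation

print(evens.collect())   # action: Spark now schedules and runs the job
print(evens.count())     # a second action re-runs the lineage unless the RDD is cached

sc.stop()
```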
Introduction to Big Data with Apache Spark is a great place to start learning about distributed computing if you know some Python. Although the lectures don’t add much technical depth to the course, they provide some big-picture background that will be useful for students who have little prior exposure to big data concepts. The labs give you adequate opportunity to get your hands dirty with Apache Spark and gain basic familiarity with the data manipulations it offers. UC Berkeley is offering a follow-up course, “Scalable Machine Learning”, that builds on the foundation laid in CS100.1x.
I give this course 4 out of 5 stars: Very Good.
Charlie Soliman –
This is an excellent course for beginners to the world of Spark, but it would be a good idea to have some programming knowledge in Python as well as a basic understanding of what big data means. The problem sets are organized methodically with much explanation, so even if you don’t know much statistics you can still follow along with the programming. I’m no statistician but managed to go through all the problem sets with few mistakes. It certainly was fun on top of being educational and informative.
Martin Strandbygaard –
Overall a good course that is worth spending the time on if you want to get familiar with Spark and the MapReduce programming model.
The lecture videos and quizzes are pretty lightweight, and nothing spectacular. However, I found the assignments really well structured, interesting, and informative. They use IPython notebooks, which I found to be a really awesome format for this kind of course and assignments.
The course is not heavy on mathematics and statistics, but the assignments will challenge you to really understand the stated problems, and the MapReduce programming model, in order to successfully complete them.
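For anyone who has not seen the MapReduce model the reviewer mentions, the classic word-count example in PySpark looks roughly like this (a sketch only; the input file name is a placeholder):

```python
# Word-count sketch illustrating the MapReduce pattern; "sample.txt" is a placeholder.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")

lines = sc.textFile("sample.txt")

counts = (lines
          .flatMap(lambda line: line.split())     # "map" phase: emit individual words
          .map(lambda word: (word.lower(), 1))    # pair each word with a count of 1
          .reduceByKey(lambda a, b: a + b))       # "reduce" phase: sum counts per word

# Print the ten most frequent words.
for word, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, n)

sc.stop()
```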
Wendao Liu –
Slightly disappointed by the content; not very informative. If you want to learn more about Spark, you definitely need to explore more material.