
Overview:

Apache Spark is a fast, general-purpose cluster computing framework designed for large-scale data processing. Spark performs in-memory data processing and runs much faster than Hadoop MapReduce. Learners will receive in-depth training in Spark concepts with Scala programming, covering components such as Spark Streaming, Spark SQL, Spark RDDs, Spark MLlib, and Spark GraphX.
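As a rough illustration of the RDD programming style taught in the course, here is a word count written against plain Scala collections (a sketch only; no Spark installation is assumed). The `flatMap`/`map` chain mirrors the transformations an RDD exposes; in real Spark you would start from `sc.textFile(...)` and aggregate with `reduceByKey` instead of `groupBy`:

```scala
// Word count over plain Scala collections, standing in for an RDD.
val lines = Seq("spark is fast", "spark runs in memory")

val counts: Map[String, Int] = lines
  .flatMap(_.split("\\s+"))               // split each line into words
  .groupBy(identity)                      // group identical words together
  .map { case (w, ws) => (w, ws.size) }   // count occurrences of each word

println(counts("spark"))
```

Because Spark's RDD API deliberately follows Scala's collection API, code like this transfers almost line for line once a `SparkContext` is available.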


Objective:

After completing the Apache Spark training, you will be able to:

  • Understand Scala and its implementation
  • Install Spark and perform Spark operations in the Spark shell
  • Understand the role of Spark RDDs
  • Implement Spark applications on YARN (Hadoop)
  • Use the Spark Streaming API
  • Implement machine learning algorithms with the Spark MLlib API
  • Analyse the Hive and Spark SQL architectures
  • Understand the Spark GraphX API and implement graph algorithms
  • Apply these skills in a project

Audience:

  • Professionals aspiring to work on Big Data Analytics.
  • Spark Developers
  • Data Scientists

Prerequisites:

Basic knowledge of big data, HDFS, and a programming language such as Java or Python is helpful, but not mandatory.



Course Curriculum

Introduction to Spark: Getting Started
Resilient Distributed Datasets (RDDs) and DataFrames
Spark Application Programming
Introduction to the Spark Ecosystem (Spark SQL)
Spark Streaming
Spark MLlib
Spark GraphX

