The Ultimate Hands-On Hadoop: Tame your Big Data!

The course starts by helping you set up the Hortonworks Data Platform (HDP) Sandbox on your PC, allowing you to run Hadoop locally.

You’ll learn about the history of Hadoop and get an overview of its ecosystem, covering buzzwords like HDFS, MapReduce, Pig, Hive, and Spark.

Once you have Hadoop up and running, the course dives into using its core components - HDFS for distributed storage and MapReduce for parallel processing.

You’ll import real movie ratings data into HDFS and write Python MapReduce jobs to analyze it.

The examples are hands-on and interactive, like finding the most popular movies by number of ratings.
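
To give a flavor of what these jobs look like, here is a minimal sketch using the mrjob library, one common way to write Python MapReduce jobs on Hadoop; it assumes the tab-separated MovieLens ratings format (user, movie, rating, timestamp), and the course's own code may differ:

```python
# Minimal sketch of a Python MapReduce job that counts ratings per movie,
# using the mrjob library and assuming MovieLens-style tab-separated input
# (userID, movieID, rating, timestamp). The course's exact code may differ.
from mrjob.job import MRJob

class RatingsBreakdown(MRJob):
    def mapper(self, _, line):
        user_id, movie_id, rating, timestamp = line.split('\t')
        yield movie_id, 1  # one rating observed for this movie

    def reducer(self, movie_id, counts):
        yield movie_id, sum(counts)  # total number of ratings per movie

if __name__ == '__main__':
    RatingsBreakdown.run()
```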

Next, you’ll learn how to use Pig Latin scripting to perform complex analyses on your data without writing low-level Java code.

You’ll find the oldest highly-rated movies and identify the most popular bad movie using Pig scripts.

The course then covers Apache Spark in depth, using its RDD API, DataFrames, and machine learning library (MLlib) to build movie recommendation engines.

You’ll see how Spark improves upon MapReduce for efficient data processing.
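
As a rough illustration, a recommender along these lines can be sketched with the DataFrame-based ALS estimator in pyspark.ml; the HDFS path and column names below are assumptions, not the course's exact code:

```python
# Minimal sketch of a movie recommender with Spark's ALS algorithm (pyspark.ml),
# assuming a ratings DataFrame with userId, movieId, and rating columns.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("MovieRecs").getOrCreate()

# Hypothetical input path; a MovieLens ratings file is assumed to be on HDFS.
ratings = (spark.read.option("header", True).option("inferSchema", True)
    .csv("hdfs:///user/maria_dev/ml-latest-small/ratings.csv"))

als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)

# Top 10 movie recommendations for every user.
model.recommendForAllUsers(10).show(truncate=False)
```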

The syllabus covers integrating Hadoop with relational databases like MySQL using Sqoop, as well as using NoSQL databases like HBase, Cassandra, and MongoDB to store and query your processed data.

You’ll get hands-on practice importing and exporting data between these systems and Hadoop.

For interactive querying, you’ll use tools like Drill, Phoenix, and Presto to join data across multiple databases on the fly.

The course explains how these differ in their approaches.

You’ll also learn about cluster management with components like YARN, Tez, Mesos, ZooKeeper, and Oozie.

Streaming data ingestion is covered using Kafka and Flume.

Finally, you’ll build real-world systems for tasks like analyzing web logs and providing movie recommendations.

The examples are practical, and you’ll be able to follow along on your own Hadoop sandbox as you learn each new technology hands-on.

The course covers a wide swath of the Hadoop ecosystem in depth, making it a great pick if you are interested in data engineering with Hadoop.

DP-203 - Data Engineering on Microsoft Azure

The syllabus covers a wide range of topics, starting with the basics of cloud computing and Azure.

You’ll learn how to create an Azure account, navigate the Azure portal, and understand different data storage options like Azure Storage accounts, Azure SQL databases, and the modern Azure Data Lake Storage Gen2.

The course even shows you how to connect applications to these data stores.

Moving on, you’ll dive deep into Transact-SQL (T-SQL) and learn about database internals, creating tables with keys, joins, clauses like SELECT, WHERE, and ORDER BY, and aggregate functions.

This lays a solid foundation for working with Azure Synapse Analytics, a powerful analytics service that combines data warehousing and big data analytics.

The syllabus covers designing and implementing Azure Synapse workspaces, exploring compute options, working with external tables (CSV, Parquet), loading data using COPY and PolyBase, and even building a star schema data warehouse.

You’ll also learn about table distributions, indexing, partitioning, and slowly changing dimensions – essential concepts for optimizing performance.

A significant portion is dedicated to Azure Data Factory, Microsoft’s cloud-based ETL and data integration service.

You’ll learn how to build pipelines, use mapping data flows for data transformations, handle JSON data, implement self-hosted integration runtimes, and integrate with other Azure services like Event Hubs and Stream Analytics.

Speaking of real-time data processing, the course covers Azure Event Hubs (for ingesting streaming data) and Azure Stream Analytics (for real-time analytics on streaming data).

You’ll learn how to create Event Hubs, send/receive events using .NET, define Stream Analytics jobs, formulate queries, handle windowing functions, and even integrate with Power BI for visualizations.
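
The sending side is demonstrated in .NET in the course; for readers working in Python, an equivalent sketch with the azure-eventhub SDK looks roughly like this (the connection string and hub name are placeholders):

```python
# Sending events to Azure Event Hubs. The course demonstrates this in .NET;
# this is an equivalent sketch using the azure-eventhub Python SDK.
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hub-namespace-connection-string>",
    eventhub_name="<event-hub-name>")

with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"deviceId": "sensor-1", "temperature": 21.5}'))
    producer.send_batch(batch)
```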

The syllabus also introduces the Scala programming language and the use of Jupyter Notebooks with Azure Synapse’s Spark pools for big data processing.

You’ll learn how to load data, perform transformations, and write data back to Azure Synapse using Scala and Python code.
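
A minimal PySpark sketch of that load-transform-write pattern inside a Synapse Spark pool notebook might look like the following; the ADLS Gen2 path and table name are placeholders, and writing into a dedicated SQL pool would instead go through the built-in Synapse connector:

```python
# Load-transform-write sketch for a Synapse Spark pool notebook, where `spark`
# is the pre-configured session. Paths and column names are placeholders.
from pyspark.sql import functions as F

df = (spark.read.option("header", True)
    .csv("abfss://data@<storageaccount>.dfs.core.windows.net/raw/sales.csv"))

daily_totals = (df
    .withColumn("OrderDate", F.to_date("OrderDate"))
    .groupBy("OrderDate")
    .agg(F.sum("Amount").alias("TotalAmount")))

# Persist the result as a lake database table.
daily_totals.write.mode("overwrite").saveAsTable("sales_daily_totals")
```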

Azure Databricks, a popular platform for running Apache Spark workloads, is also covered.

You’ll create Databricks workspaces, clusters, load data from various sources (including streaming from Event Hubs), perform transformations, and integrate with Azure Synapse.

Data security is a crucial aspect, and the course doesn’t disappoint.

You’ll learn about securing Azure Data Lake Gen-2 using account keys, shared access signatures, Azure Active Directory integration, and access control lists.

The syllabus also covers data encryption, masking, column/row-level security, and Azure AD authentication for Azure Synapse.

Finally, you’ll explore monitoring and optimization techniques for data storage and processing using services like Microsoft Purview (for data governance), Azure Monitor (for alerts and metrics), and workload management in Azure Synapse.

The course even touches on performance tuning aspects like access tiers, lifecycle policies, and transaction logging.

Apache Spark 3 - Spark Programming in Python for Beginners

This course starts with the basics of Big Data and Data Lakes, explaining the significance of Hadoop’s evolution and introducing Apache Spark and Databricks Cloud.

It then guides you through setting up your development environment, whether you’re using Mac or Windows, ensuring you’re ready to write and run Spark code effectively.

The curriculum dives into Spark DataFrames and Spark SQL, teaching you how to manipulate and query data through practical examples.

You’ll gain insights into the Spark Execution Model and Architecture, learning about cluster managers and execution modes to optimize your applications.

The course also covers the Spark Programming Model, focusing on Spark Sessions, project configuration, and unit testing, preparing you for real-world development scenarios.

Advanced topics include working with Spark’s Structured API Foundation, understanding Data Sources and Sinks, and mastering data transformations and aggregations.

This knowledge equips you to handle various data processing tasks with ease.

The capstone project offers a chance to apply your skills in a comprehensive project, including Kafka integration and setting up CI/CD pipelines.
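
As a hint of what the Kafka piece involves, here is a minimal Structured Streaming sketch that reads a topic and echoes it to the console; the broker address and topic name are assumptions, and the spark-sql-kafka connector package must be on the classpath:

```python
# Minimal Kafka-to-console stream with Spark Structured Streaming.
# Broker address and topic name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("KafkaIngest").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "invoices")
    .load()
    .select(col("value").cast("string").alias("json_payload")))

query = (events.writeStream
    .format("console")
    .outputMode("append")
    .start())
query.awaitTermination()
```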

Quizzes and tests throughout the course help reinforce your learning, while bonus lectures and archived content provide additional resources.

By the end of this course, you’ll have a solid understanding of Apache Spark and the skills to tackle data processing challenges confidently.

DP-203: Data Engineering on Microsoft Azure - 2022

The course begins with an introduction to cloud computing and the role of a data engineer.

You’ll learn about the evolution of this profession, the responsibilities it entails, and the technologies commonly used by data engineers.

This section provides a solid foundation for understanding the context in which you’ll be working.

Next, the course dives into data storage options on Azure, covering both non-relational and relational data stores.

You’ll explore Azure Blob Storage, Azure Data Lake Storage Gen2, Azure Table Storage, Azure Queue Storage, and Azure File Share Storage.

Additionally, you’ll learn about Azure Cosmos DB, a globally distributed, multi-model database service.

For relational data stores, the course covers Azure SQL Database, Azure Elastic Database, and Azure Synapse Analytics (formerly SQL Data Warehouse).

You’ll learn about provisioning, scaling, security, high availability, disaster recovery, and monitoring for these services.

The course then focuses on batch processing and stream processing, two essential components of data engineering.

You’ll learn how to design and develop data processing solutions using Azure Data Factory, Azure Databricks, and Azure Stream Analytics.

Hands-on demos will guide you through tasks like copying data, transforming data using Databricks notebooks, and processing real-time data streams.

Monitoring is a crucial aspect of data engineering, and the course covers monitoring data storage and data processing solutions.

You’ll learn how to use Azure Monitor to track the performance and health of your Azure resources.

Optimizing Azure data solutions is another key topic covered in the course.

You’ll learn techniques for troubleshooting data partitioning bottlenecks and for optimizing Data Lake Storage, Stream Analytics, and Azure Synapse Analytics.

Additionally, you’ll explore strategies for managing the data lifecycle.

The course also provides guidance on designing Azure data solutions, covering topics such as data types, storage types, architecture patterns, real-time processing, compute resource provisioning, and security considerations.

Throughout the course, you’ll have access to practice tests and quizzes to reinforce your learning and prepare for the DP-203 certification exam.

The course also includes additional study materials and resources to further enhance your understanding.

Data Engineering Essentials using SQL, Python, and PySpark

This course covers essential topics like SQL, Python, Hadoop, and Spark, making it ideal for both beginners and experienced professionals aiming to sharpen their data engineering skills.

You’ll start with SQL for Data Engineering, learning about database technologies, data warehouses, and advancing from basic to complex SQL queries.

The course then guides you through Python programming, from setting up your environment to mastering data processing with the pandas DataFrame API.
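
A small, hypothetical example of the kind of pandas work covered (the file name and columns are made up):

```python
# Aggregate a CSV of orders with pandas: total revenue per customer.
import pandas as pd

orders = pd.read_csv("orders.csv")  # hypothetical input file

revenue = (orders
    .groupby("customer_id", as_index=False)["order_amount"]
    .sum()
    .sort_values("order_amount", ascending=False))

print(revenue.head(10))
```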

A significant focus of this course is on Apache Spark.

You’ll learn to set up a Databricks environment on the Google Cloud Platform (GCP), gaining hands-on experience in data processing with Spark SQL and PySpark.

This includes creating Delta tables, performing data transformations, aggregations, joins, and optimizing Spark applications for better performance.
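
For a sense of that workflow on Databricks, here is a hedged sketch: persist a DataFrame as a Delta table, then aggregate it with Spark SQL (paths and table names are placeholders, and `spark` is the session Databricks provides in notebooks):

```python
# Write a Delta table and query it with Spark SQL. Paths and names are placeholders.
from pyspark.sql import functions as F

orders = spark.read.json("dbfs:/FileStore/retail_db/orders")

# Persist as a Delta table (Delta is the default format on recent Databricks runtimes).
orders.write.format("delta").mode("overwrite").saveAsTable("orders_delta")

daily_status_counts = spark.sql("""
    SELECT order_date, order_status, count(*) AS order_count
    FROM orders_delta
    GROUP BY order_date, order_status
    ORDER BY order_date
""")
daily_status_counts.show()
```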

Real-world projects like a File Format Converter and a Files to Database Loader offer practical experience, while sections on ELT data pipelines using Databricks provide insights into efficient data pipeline construction and operation.

Performance tuning is thoroughly covered, teaching you to read explain plans, identify bottlenecks, and apply optimization strategies.

Additionally, the course equips you with troubleshooting and debugging skills for SQL, Python, and Spark applications, preparing you to solve common development and deployment issues.

Apache Spark 3 - Spark Programming in Scala for Beginners

The course begins with an introduction to Big Data, Data Lakes, and Hadoop’s evolution, setting a strong foundation for understanding the significance of Apache Spark in processing large datasets efficiently.

It then guides you through setting up Spark in various environments, including command line, IntelliJ IDEA, and Databricks, preparing you for real-world scenarios.

You’ll learn about Spark’s execution model and architecture, gaining insights into distributed processing, execution modes, and how to optimize your Spark applications.

Key sections include in-depth exploration of the Spark Programming Model, where you’ll work with Spark Sessions, DataFrames, and understand debugging and unit testing.

This builds your capability to manipulate and analyze data effectively.

Advanced topics covered are RDDs, Datasets, DataFrames, and using Spark SQL for data analysis.

You’ll become proficient in reading and writing data in formats like CSV, JSON, and Parquet, and managing Spark SQL tables.

The course also dives into DataFrame and Dataset Transformations, Aggregations, and Joins, equipping you with the skills to perform complex data analysis and optimizations.

Quizzes throughout the course test your understanding, complemented by provided source code and resources for additional support.

Data Engineering using AWS Data Analytics

The course starts by setting up the local development environment on Windows, Mac, and Cloud9 IDE.

You’ll learn to use AWS CLI, Python virtual environments, Jupyter Notebooks, and connect to AWS resources like EC2 instances using SSH.

It then dives into core AWS services like S3 for storage, covering concepts like versioning, cross-region replication, storage classes, and managing buckets/objects using the CLI and Python.

You’ll also learn IAM for security, creating users, roles, policies, and managing them programmatically.

For infrastructure provisioning, you’ll work with EC2 instances - launching them, connecting via SSH, understanding security groups, IP addresses, and the instance lifecycle.

You’ll even create custom Amazon Machine Images (AMIs) from existing instances.

The course covers data ingestion using AWS Lambda functions, where you’ll develop Python code to download data, use third-party libraries, and upload files to S3 incrementally using bookmarks.
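
A stripped-down sketch of such an ingestion function is shown below; the URL, bucket, and key are placeholders, and the course's bookmark logic for incremental uploads is omitted:

```python
# Lambda-style ingestion sketch: download a file over HTTP and upload it to S3
# with boto3. Bucket, URL, and key are hypothetical.
import boto3
import requests

s3 = boto3.client("s3")

def lambda_handler(event, context):
    url = "https://example.com/data/2024-01-01.json.gz"  # hypothetical source
    response = requests.get(url)
    response.raise_for_status()

    s3.put_object(
        Bucket="my-ingest-bucket",
        Key="landing/2024-01-01.json.gz",
        Body=response.content)
    return {"statusCode": 200, "body": "uploaded"}
```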

You’ll then learn Apache Spark development using PySpark, creating Spark sessions, reading/writing data, and processing it using Spark APIs.

This is followed by an overview of AWS Glue components like crawlers, jobs, triggers, and integrating Spark UI for monitoring Glue jobs.

For big data processing, you’ll provision and work with Amazon EMR clusters, deploy Spark applications, and run them in client/cluster modes.

You’ll also build streaming pipelines using Kinesis Firehose and consume data from S3 using boto3.

The course covers AWS Athena for querying data in S3 using SQL, creating tables (partitioned/non-partitioned), and managing Athena resources via CLI and boto3.
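
As an illustration of the boto3 side, a query can be started and polled roughly like this (database, table, and output location are placeholders):

```python
# Run an Athena query with boto3 and print the results once it finishes.
import boto3, time

athena = boto3.client("athena")

run = athena.start_query_execution(
    QueryString="SELECT order_status, count(*) AS cnt FROM orders GROUP BY order_status",
    QueryExecutionContext={"Database": "retail_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"})

query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(
        QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```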

You’ll provision Amazon Redshift clusters, copy data from S3, develop applications using Python, and optimize tables with distribution/sort keys.

Lastly, you’ll learn Redshift’s federated queries to integrate with RDS databases and use Redshift Spectrum to directly query data in S3 data lakes.

Azure Databricks and Spark SQL (Python)

You’ll start by understanding the fundamentals of big data, Hadoop, and the Spark architecture.

The course then dives into setting up your Azure account and creating a Databricks service, guiding you through the user interface and cluster management.

Reading and writing data is a crucial aspect covered, including working with various file formats like Parquet.

You’ll learn how to analyze and transform data using Spark SQL, mastering techniques like filtering, sorting, string manipulation, and joining DataFrames.
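
A hypothetical example pulling those techniques together on Databricks (table and column names are invented):

```python
# Filter, string manipulation, join, aggregation, and sort in PySpark.
# Table and column names are placeholders.
from pyspark.sql import functions as F

customers = spark.table("bronze.customers")
orders = spark.table("bronze.orders")

result = (orders
    .filter(F.col("order_status") == "COMPLETE")
    .join(customers, on="customer_id", how="inner")
    .withColumn("customer_name",
                F.concat_ws(" ", F.initcap("first_name"), F.initcap("last_name")))
    .groupBy("customer_name")
    .agg(F.sum("order_amount").alias("total_spent"))
    .orderBy(F.desc("total_spent")))

display(result)  # Databricks notebook helper for rendering results
```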

The course introduces the Medallion Architecture, a structured approach to data processing.

Hands-on assignments reinforce your learning, such as transforming customer order data from bronze to silver and gold layers.

Visualizations and dashboards are explored, enabling you to present your insights effectively.

The syllabus covers integrating Databricks with Azure Data Lake Storage (ADLS), leveraging access keys, SAS tokens, and mounting ADLS to DBFS.
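
A sketch of the account-key variant of such a mount is below; storage account, container, and secret scope names are placeholders, and the course also walks through SAS tokens and Azure AD-based access:

```python
# Mount an ADLS Gen2 container to DBFS using an account key pulled from a
# Databricks secret scope. All names here are placeholders.
storage_account = "<storage-account>"
container = "raw"
account_key = dbutils.secrets.get(scope="adls-scope", key="account-key")

dbutils.fs.mount(
    source=f"abfss://{container}@{storage_account}.dfs.core.windows.net/",
    mount_point=f"/mnt/{storage_account}/{container}",
    extra_configs={
        f"fs.azure.account.key.{storage_account}.dfs.core.windows.net": account_key})

display(dbutils.fs.ls(f"/mnt/{storage_account}/{container}"))
```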

You’ll learn about the Hive Metastore, creating databases, tables (managed and external), and views.

The Delta Lake and Databricks Lakehouse concepts are introduced, empowering you to work with Delta Lake data files, perform updates, merges, and utilize table utility commands.
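
An upsert with the Python DeltaTable API, for example, looks roughly like this (table names are placeholders):

```python
# Delta Lake MERGE (upsert) sketch using the DeltaTable API, assuming a target
# Delta table and a DataFrame of incoming updates. Names are placeholders.
from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "silver.customers")
updates = spark.table("bronze.customer_updates")

(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Table utility commands such as history are also available:
target.history().select("version", "operation", "timestamp").show()
```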

Modularizing code by linking notebooks, defining functions, and working with Python UDFs is also covered.

Streaming data processing is a key focus, with lessons on Spark Structured Streaming, simulating data streams, and using Auto Loader.
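
A minimal Auto Loader sketch, with placeholder paths, shows the shape of such a stream:

```python
# Auto Loader (cloudFiles) stream: pick up new JSON files from a landing folder
# and append them to a bronze Delta table. All paths are placeholders.
stream = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/datalake/_schemas/orders")
    .load("/mnt/datalake/landing/orders"))

query = (stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/datalake/_checkpoints/orders_bronze")
    .outputMode("append")
    .toTable("bronze.orders"))
```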

The cutting-edge Delta Live Tables feature is explored, enabling you to build and manage data pipelines with data quality checks.

Orchestrating tasks with Databricks Jobs, scheduling, and handling failures are covered.

Access control lists (ACLs) are explained, allowing you to manage user permissions and workspace access.

The course delves into the Databricks Command Line Interface (CLI), enabling you to interact with Databricks programmatically.

Source control with Databricks Repos and Azure DevOps is introduced, facilitating collaboration and version control.

Finally, you’ll learn about CI/CD (Continuous Integration/Continuous Deployment) with Databricks, setting up build pipelines, deploying to test and production environments, and leveraging parallelism for efficient workflows.

Throughout the course, you’ll work with Python, SQL, and the Databricks ecosystem, gaining practical experience in big data processing, data engineering, and cloud-based data solutions.

A Big Data Hadoop and Spark project for absolute beginners

The course starts by introducing you to fundamental Big Data and Hadoop concepts like HDFS, MapReduce, and YARN.

You’ll get hands-on experience storing files in HDFS and querying data using Hive.

From there, it dives deep into Apache Spark, which is a powerful tool for data processing.

You’ll learn about RDDs, DataFrames, and Spark SQL, as well as how to use Spark for data transformation tasks.

The course even includes a project where you’ll apply your Hadoop and Spark skills to clean marketing data for a bank.

But the real strength of this course is its focus on practical, real-world skills.

You’ll learn how to set up development environments for Scala and Python, structure your code, implement logging and error handling, and write unit tests.

There are entire sections dedicated to creating data pipelines that integrate Hadoop, Spark, and databases like PostgreSQL.
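
The Spark-to-PostgreSQL step of such a pipeline can be sketched as a JDBC write; the connection details below are placeholders, and the PostgreSQL JDBC driver must be on the Spark classpath:

```python
# Write a processed DataFrame to PostgreSQL over JDBC. Paths, credentials, and
# table names are placeholders for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PipelineLoad").getOrCreate()

results = spark.read.parquet("hdfs:///user/hive/warehouse/cleaned_marketing")

(results.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/analytics")
    .option("dbtable", "public.marketing_clean")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .option("driver", "org.postgresql.Driver")
    .mode("overwrite")
    .save())
```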

The syllabus also covers more advanced topics like Spark Structured Streaming, reading configurations from files, and using AWS services like S3, Glue, and Athena to build a serverless data lake solution.

And with the addition of Databricks Delta Lake, you’ll gain insights into the modern data lakehouse architecture.

What’s impressive is how the course guides you through the entire data engineering journey, from basic concepts to building robust, production-ready data pipelines.

You’ll not only learn the technologies but also best practices for coding, testing, and deploying your data applications.

Data Engineering using Databricks on AWS and Azure

This course provides a comprehensive overview, covering everything from setting up your environment to deploying and running Spark applications on Databricks.

You’ll start by getting familiar with Databricks on Azure, including signing up for an account, creating a workspace, and launching clusters.

The course guides you through uploading data, creating notebooks, and developing and validating Spark applications using the Databricks UI.

Next, you’ll dive into AWS essentials like setting up the AWS CLI, configuring IAM users and roles, and managing S3 buckets and objects.

You’ll learn how to integrate AWS services like S3 and Glue Catalog with Databricks, granting necessary permissions and mounting S3 buckets onto your clusters.

The course also covers setting up a local development environment on Windows and Mac, installing tools like Python, Boto3, and Jupyter Lab.

You’ll learn to integrate IDEs like PyCharm with Databricks Connect, enabling seamless development and debugging.

Moving on, you’ll explore the Databricks CLI, interacting with file systems, managing clusters, and submitting jobs programmatically.

The course walks you through the Spark application development lifecycle, from setting up virtual environments to reading, processing, and writing data using Spark APIs.

You’ll gain hands-on experience deploying and running Spark applications on Databricks, including refactoring code, building deployable bundles, and running jobs via the web UI and CLI.

The course covers modularizing notebooks and running jobs from them.

Additionally, you’ll dive deep into Delta Lake, learning to create, update, delete, and merge data using both Spark DataFrames and SQL.

You’ll also learn about accessing cluster terminals via web and SSH, installing software using init scripts, and working with Spark Structured Streaming for incremental loads.

The course concludes with an overview of Databricks SQL clusters, including running queries, analyzing data, and loading data into Delta tables using COPY commands.