In this Capstone you'll demonstrate your ability to work as a Data Engineer. Your mission is to design, implement, and manage a complete data and analytics platform consisting of relational and non-relational databases, a data warehouse, data pipelines, big data processing engines, and Business Intelligence (BI) tools.
This Capstone project requires you to apply and sharpen the skills and knowledge you developed in the courses of the IBM Data Engineering Professional Certificate. You will use multiple tools and technologies to design databases; collect data from multiple sources; extract, transform, and load (ETL) data into a data warehouse; and use a cloud-based BI tool to create analytic reports and visualizations. You will also implement predictive analytics and machine learning models using big data tools and techniques.
This capstone requires a significant amount of hands-on lab work throughout the course. You'll demonstrate your knowledge of and proficiency with Python, Bash scripts, SQL, NoSQL, RDBMSes, ETL, MySQL, PostgreSQL, Db2, MongoDB, Apache Airflow, Apache Spark, and Cognos Analytics.
Upon successfully completing this Capstone, you should have the confidence and portfolio to take on real-world data engineering projects and showcase your abilities to perform as an entry-level data engineer.
- Build a complete data and analytics platform.
- Set up, manage, and query relational and NoSQL databases.
- Create data pipelines and ETL processes using Apache Airflow.
- Design and populate a star/snowflake schema data warehouse and query it using SQL.
- Analyze warehouse data with the Business Intelligence (BI) tool Cognos Analytics to create reports and dashboards.
- Deploy a big data machine learning model using Apache Spark.
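To give a flavor of the star-schema objective above, here is a minimal sketch of a fact table joined to a dimension table. It uses SQLite from Python's standard library purely as a stand-in for the warehouse, and all table and column names are invented for the example; in the Capstone itself the schema would live in a system such as Db2 or PostgreSQL.

```python
import sqlite3

# Minimal star schema: one fact table with foreign keys into two
# dimension tables. SQLite stands in for the warehouse here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, month TEXT, year INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    date_id INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    amount REAL
);
""")

cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(1, "Jan", 2024), (2, "Feb", 2024)])
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 1, 100.0), (2, 1, 2, 50.0), (3, 2, 1, 75.0)])

# A typical warehouse query: aggregate the fact table grouped by
# a dimension attribute (revenue per month).
rows = cur.execute("""
    SELECT d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d ON f.date_id = d.date_id
    GROUP BY d.month
    ORDER BY d.month
""").fetchall()
print(rows)  # [('Feb', 75.0), ('Jan', 150.0)]
```

The same fact/dimension pattern scales to the full Capstone warehouse: facts stay narrow and numeric, while descriptive attributes live in the dimensions you join against.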
- Duration: 6 weeks
- Skill level: All levels
- Language: English
- Assessments: Yes