Detailed Course Outline
Day 1
Module 1: Introduction to MLOps
- Processes
- People
- Technology
- Security and governance
- MLOps maturity model
 
Module 2: Initial MLOps: Experimentation Environments in SageMaker Studio
- Bringing MLOps to experimentation
- Setting up the ML experimentation environment
- Demonstration: Creating and Updating a Lifecycle Configuration for SageMaker Studio
- Hands-On Lab: Provisioning a SageMaker Studio Environment with the AWS Service Catalog
- Workbook: Initial MLOps
 
Module 3: Repeatable MLOps: Repositories
- Managing data for MLOps
- Version control of ML models
- Code repositories in ML
 
Module 4: Repeatable MLOps: Orchestration
- ML pipelines
- Demonstration: Using SageMaker Pipelines to Orchestrate Model Building Pipelines
 
Day 2
Module 4: Repeatable MLOps: Orchestration (continued)
- End-to-end orchestration with AWS Step Functions
- Hands-On Lab: Automating a Workflow with Step Functions
- End-to-end orchestration with SageMaker Projects
- Demonstration: Standardizing an End-to-End ML Pipeline with SageMaker Projects
- Using third-party tools for repeatability
- Demonstration: Exploring Human-in-the-Loop During Inference
- Governance and security
- Demonstration: Exploring Security Best Practices for SageMaker
- Workbook: Repeatable MLOps
 
Module 5: Reliable MLOps: Scaling and Testing
- Scaling and multi-account strategies
- Testing and traffic shifting
- Demonstration: Using SageMaker Inference Recommender
- Hands-On Lab: Testing Model Variants
 
Day 3
Module 5: Reliable MLOps: Scaling and Testing (continued)
- Hands-On Lab: Shifting Traffic
- Workbook: Multi-Account Strategies
 
Module 6: Reliable MLOps: Monitoring
- The importance of monitoring in ML
- Hands-On Lab: Monitoring a Model for Data Drift
- Operations considerations for model monitoring
- Remediating problems identified by monitoring ML solutions
- Workbook: Reliable MLOps
- Hands-On Lab: Building and Troubleshooting an ML Pipeline