AIP-C01 Study Hub
FM Integration Week 1 · Wednesday

Day 3: SageMaker for GenAI

Learning Objectives

  • Understand SageMaker endpoint deployment patterns
  • Know when to use SageMaker JumpStart vs Bedrock
  • Understand LoRA, QLoRA, and full fine-tuning trade-offs
  • Use SageMaker Model Registry for version management
  • Apply SageMaker Clarify for bias detection and Model Monitor for drift

Tasks

0/6 completed
  • Read · 30m

    SageMaker JumpStart Foundation Models

    How to deploy foundation models via JumpStart. Understand the deployment configuration options.

  • Read · 20m

    SageMaker Model Registry

    Version management, approval workflows, deployment pipelines for models.

  • Read · 20m

    SageMaker Clarify for Bias Detection

    Pre-training and post-training bias detection. Model explainability.

  • Read · 20m

    SageMaker Ground Truth and Ground Truth Plus

    Generating evaluation datasets and human feedback loops for GenAI.

  • Blog · 30m

    Advanced Fine-Tuning Methods on SageMaker AI

    Compares LoRA, QLoRA, and full fine-tuning. Key for exam questions about model customization.

  • Blog · 20m

    Tutorials Dojo AIP-C01 Exam Guide (SageMaker coverage)

    Community exam guide covering SageMaker topics for the certification.

Exam Skills

Write your understanding, then reveal the reference answer.

0/2 reviewed

Hands-On Lab

Build real muscle memory with these activities.

Intermediate · 45 min

Deploy a JumpStart Foundation Model Endpoint

Use SageMaker JumpStart to deploy a foundation model endpoint and test inference.

  1. Open SageMaker Studio and navigate to JumpStart
  2. Search for a foundation model (e.g., Llama 3 8B or Mistral 7B)
  3. Click Deploy and select an ml.g5.2xlarge instance
  4. Wait for the endpoint to reach InService status (5-10 minutes)
  5. Send a test inference request using the Studio notebook or AWS CLI and verify the response
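The lab steps can also be scripted with the SageMaker Python SDK. A minimal sketch, assuming the `meta-textgeneration-llama-3-8b` JumpStart model ID and the common `inputs`/`parameters` request schema; the `build_payload` helper is illustrative, not part of the SDK:

```python
import json


def build_payload(prompt, max_new_tokens=256, temperature=0.6):
    """Build a text-generation request body in the format most
    JumpStart LLM containers expect (illustrative helper; check the
    model's example payloads in JumpStart)."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }


def deploy_and_test(model_id="meta-textgeneration-llama-3-8b",
                    instance_type="ml.g5.2xlarge"):
    """Deploy a JumpStart model and send one test request.
    Requires AWS credentials and the sagemaker SDK; not invoked here."""
    from sagemaker.jumpstart.model import JumpStartModel  # local import

    model = JumpStartModel(model_id=model_id)
    # Gated models (e.g., Llama) require EULA acceptance at deploy time.
    predictor = model.deploy(instance_type=instance_type, accept_eula=True)
    response = predictor.predict(build_payload("Hello, world"))
    predictor.delete_endpoint()  # avoid idle-endpoint charges
    return response


if __name__ == "__main__":
    print(json.dumps(build_payload("What is LoRA?"), indent=2))
```

Remember to delete the endpoint when finished; g5 instances bill while InService.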
Intermediate · 30 min

Explore LoRA Fine-Tuning Configuration on Bedrock

Walk through the Bedrock fine-tuning console to understand LoRA configuration options.

  1. Open the Bedrock console and navigate to Custom models → Fine-tuning
  2. Select a supported model (e.g., Amazon Nova or Llama) and click Create fine-tuning job
  3. Review the training data format requirements (JSONL with prompt/completion pairs)
  4. Examine the hyperparameter options: epochs, batch size, learning rate, warmup steps
  5. Note the S3 bucket requirements for training data and output artifacts (do not submit unless you have training data ready)
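Step 3's JSONL format and step 4's hyperparameters can be sketched in code. The hyperparameter key names below match the Amazon Titan text-model customization format; other base models differ, so treat them as assumptions and verify against the model's data-format documentation:

```python
import json


def make_training_record(prompt, completion):
    """One JSONL line in the prompt/completion format Bedrock
    fine-tuning expects for text models (field names vary by
    base model -- check the model's documentation)."""
    return json.dumps({"prompt": prompt, "completion": completion})


# Hyperparameters from step 4, shaped as they would be passed to
# bedrock.create_model_customization_job. Values are illustrative,
# and Bedrock expects them as strings.
HYPERPARAMETERS = {
    "epochCount": "2",
    "batchSize": "1",
    "learningRate": "0.00001",
    "learningRateWarmupSteps": "0",
}

if __name__ == "__main__":
    print(make_training_record("Summarize: ...", "A short summary."))
```

Each record is one line of the JSONL file uploaded to the training-data S3 bucket noted in step 5.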
Beginner · 20 min

Review SageMaker Model Registry Workflow

Explore Model Registry for versioning and approval workflows using the SageMaker console.

  1. Open SageMaker Studio and navigate to Model Registry
  2. Create a new Model Package Group called 'genai-models'
  3. Review the approval status workflow: PendingManualApproval → Approved → Deployed
  4. Note how model versions are tracked with metadata, metrics, and lineage
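Steps 2 and 3 can be sketched with boto3 plus a tiny model of the approval gate. Note that "Deployed" in step 3 is a pipeline outcome rather than a registry status; the `ModelApprovalStatus` field itself takes PendingManualApproval, Approved, or Rejected:

```python
# Approval statuses the Model Registry API accepts for a model
# package version ("Deployed" happens downstream, via CI/CD acting
# on Approved versions).
APPROVAL_STATUSES = {"PendingManualApproval", "Approved", "Rejected"}


def approve(status):
    """Tiny sketch of the manual-approval gate: only a pending
    version can be approved."""
    if status != "PendingManualApproval":
        raise ValueError(f"cannot approve from {status}")
    return "Approved"


def create_group(name="genai-models"):
    """Create the Model Package Group from step 2 via boto3.
    Requires AWS credentials; not invoked here."""
    import boto3  # local import

    sm = boto3.client("sagemaker")
    return sm.create_model_package_group(
        ModelPackageGroupName=name,
        ModelPackageGroupDescription="FM versions for GenAI workloads",
    )
```

A deployment pipeline would typically listen for the version's status flipping to Approved, then create or update the serving endpoint.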

Scenarios

Think through each scenario before revealing the answer.

D1: FM Integration · Hard · #3

Healthcare Model Fine-Tuning

A healthcare company works with domain-specific medical terminology that general-purpose FMs handle poorly. They need to fine-tune a model on their proprietary medical corpus. Which approach should they take?
Think First
  • What fine-tuning method minimizes compute cost while adapting to domain language?
  • Where do you store and version the fine-tuned model?
  • What compliance requirements does healthcare impose on deployment?
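The first "Think First" question has a quantitative angle: LoRA freezes the base weights and trains only two low-rank factors per adapted matrix, so the trainable-parameter count collapses. A minimal sketch with illustrative numbers (a 4096×4096 attention projection is typical of 7-8B models, and rank 16 is a common setting; both are assumptions, not scenario facts):

```python
def lora_trainable_params(d_in, d_out, rank):
    """Trainable parameters LoRA adds for one d_in x d_out weight
    matrix: factor A (d_in x rank) plus factor B (rank x d_out)."""
    return rank * (d_in + d_out)


full = 4096 * 4096                              # full fine-tuning: every weight
lora = lora_trainable_params(4096, 4096, rank=16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full / lora:.0f}x")
```

That roughly 100x reduction per matrix is why LoRA (and QLoRA, which adds a quantized base model) is the usual answer when the scenario emphasizes compute cost; full fine-tuning is reserved for deep domain shifts that adapters cannot capture.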

Practice Questions

11 questions across 3 difficulty levels.

Further Reading

Go deeper into today's topics.