AIP-C01 Study Hub
Safety & Security Week 3 · Wednesday

Day 17: Governance + Responsible AI + AgentCore Policy

Learning Objectives

  • Use AgentCore Policy with Cedar rules for agent action boundaries
  • Apply SageMaker Clarify for bias detection in GenAI outputs
  • Create model cards for documentation and transparency
  • Design multi-account governance with SCPs
  • Understand LLM-as-a-judge for fairness evaluations

Tasks

4 tasks
  • Blog · 30 min

    AgentCore Policy and Evaluations Blog

    Cedar policies, natural language authoring, trust-but-verify approach.

  • Read · 30 min

    SageMaker Clarify for Bias Detection

    Pre-training and post-training bias analysis. Model explainability.

  • Blog · 20 min

    Self-Service Digital Assistant with Lex and Bedrock KB

    Amazon Lex + Bedrock for structured dialogue chatbots. Contact center pattern.

  • Study · 60 min

    Governance Topics Review

    Multi-account SCPs, adversarial testing, LLM-as-a-judge for fairness, Connect + Lex + Bedrock for contact centers.
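
The "LLM-as-a-judge for fairness" pattern above can be sketched in a few lines. This is an illustrative sketch, not an AWS API: the prompt wording and the 1–5 rubric are assumptions, and in practice the prompt would be sent to a Bedrock judge model (e.g. via the `bedrock-runtime` Converse API) and the reply parsed back.

```python
# Sketch of an LLM-as-a-judge fairness check (illustrative, not an AWS API).
# Two responses to the same question, differing only in a demographic
# attribute, are shown to a judge model that scores consistency 1-5.
import re

def build_judge_prompt(response_a: str, response_b: str) -> str:
    """Build the judge prompt; the rubric here is a hypothetical example."""
    return (
        "You are a fairness evaluator. Compare the two responses below, "
        "which answer the same question for different demographic groups.\n"
        f"Response A: {response_a}\n"
        f"Response B: {response_b}\n"
        "Reply with 'Score: N' where N is 1-5 (5 = fully consistent treatment)."
    )

def parse_score(judge_output: str) -> int:
    """Pull the 1-5 score out of the judge model's free-text reply."""
    match = re.search(r"Score:\s*([1-5])", judge_output)
    if not match:
        raise ValueError("judge reply did not contain a score")
    return int(match.group(1))

print(parse_score("Both responses are equivalent. Score: 5"))  # 5
```

The key exam point: the judge is a second model grading the first model's outputs against a rubric, which scales fairness evaluation beyond what human review alone can cover.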

Exam Skills

Write your understanding, then reveal the reference answer.

7 prompts to review.

Hands-On Lab

Build real muscle memory with these activities.

Advanced · 45 min

Configure AgentCore Policy with Cedar Rules

Set up Cedar policies to control agent behavior and limit which tools an agent can invoke.

  1. Open the AgentCore console and navigate to Policy.
  2. Create a new policy store for your agent.
  3. Write a Cedar permit rule: permit (principal, action == Action::"invokeToolGroup", resource == ToolGroup::"approved-tools");
  4. Add a deny rule: forbid (principal, action, resource) when { context.sensitivityLevel > 3 };
  5. Test the policy by invoking the agent with both permitted and denied tool requests.
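
The evaluation logic of the two lab rules can be mimicked in plain Python. This is a toy sketch, not the Cedar engine: the tool-group name is taken from the lab, and the deny-overrides behavior (any matching forbid beats any matching permit, and no match defaults to deny) reflects Cedar's documented semantics.

```python
# Toy evaluator mimicking the lab's two Cedar rules (NOT the Cedar engine).
# Cedar is deny-overrides: a matching forbid always wins, and a request
# with no matching permit is denied by default.

APPROVED_TOOL_GROUP = "approved-tools"  # resource name from the lab

def is_authorized(action: str, tool_group: str, context: dict) -> bool:
    # forbid (principal, action, resource) when { context.sensitivityLevel > 3 };
    if context.get("sensitivityLevel", 0) > 3:
        return False
    # permit (principal, action == Action::"invokeToolGroup",
    #         resource == ToolGroup::"approved-tools");
    if action == "invokeToolGroup" and tool_group == APPROVED_TOOL_GROUP:
        return True
    # Cedar default: no matching permit -> deny
    return False

print(is_authorized("invokeToolGroup", "approved-tools", {"sensitivityLevel": 1}))  # True
print(is_authorized("invokeToolGroup", "approved-tools", {"sensitivityLevel": 5}))  # False
print(is_authorized("invokeToolGroup", "other-tools", {}))                          # False
```

Step 5 of the lab exercises exactly these three cases: a permitted request, a request denied by the sensitivity forbid, and a request denied by default.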
Intermediate · 30 min

Review SageMaker Clarify Bias Reports

Explore SageMaker Clarify bias detection capabilities for understanding model fairness.

  1. Open SageMaker Studio and navigate to the Clarify section.
  2. Review the pre-training bias metrics: Class Imbalance (CI) and Difference in Proportions of Labels (DPL).
  3. Review the post-training bias metrics: Disparate Impact (DI) and Difference in Positive Proportions in Predicted Labels (DPPL).
  4. Review the SHAP-based feature attributions for model explainability.
  5. Note how these apply to GenAI: evaluate model outputs for fairness across demographic groups.
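
The four metrics from steps 2–3 are simple ratios and differences, so they can be computed by hand. This sketch follows the formulas as documented for Clarify (facet a = advantaged group, facet d = disadvantaged group); the resume-screening numbers are invented for illustration.

```python
# Hand-rolled versions of four SageMaker Clarify bias metrics.
# n = group size, q = observed positive-label rate (pre-training),
# qhat = predicted positive rate (post-training).

def class_imbalance(n_a: int, n_d: int) -> float:
    """CI in [-1, 1]: how unevenly the two facet groups are represented."""
    return (n_a - n_d) / (n_a + n_d)

def diff_in_proportions_of_labels(q_a: float, q_d: float) -> float:
    """DPL (pre-training): difference in observed positive-label rates."""
    return q_a - q_d

def disparate_impact(qhat_a: float, qhat_d: float) -> float:
    """DI (post-training): ratio of predicted positive rates, d over a."""
    return qhat_d / qhat_a

def diff_in_positive_predicted_labels(qhat_a: float, qhat_d: float) -> float:
    """DPPL (post-training): difference in predicted positive rates."""
    return qhat_a - qhat_d

# Invented example: 800 resumes from group a, 200 from group d; 40% of a
# and 30% of d were labeled positive; the model predicts positives for
# 35% of a and 21% of d.
print(round(class_imbalance(800, 200), 4))                      # 0.6
print(round(diff_in_proportions_of_labels(0.40, 0.30), 4))      # 0.1
print(round(disparate_impact(0.35, 0.21), 4))                   # 0.6
print(round(diff_in_positive_predicted_labels(0.35, 0.21), 4))  # 0.14
```

A DI well below 1 (like the 0.6 here) is the kind of signal the HR scenario below turns on: the model favors the advantaged group in its predictions even before anyone inspects individual decisions.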

Scenarios

Think through each scenario before revealing the answer.

D3: Safety & Security · Hard · #13

HR Resume Screening Bias Mitigation

An HR department deploys a resume screening agent. The legal team raises concerns about demographic bias. How do you address this?
Think First
  • Which service detects bias in model outputs?
  • How do you document the model's intended use and limitations?
  • How do you prevent the agent from making final hiring decisions?

Practice Questions

17 questions across 3 difficulty levels.

Further Reading

Go deeper into today's topics.