Day 17: Governance + Responsible AI + AgentCore Policy
Learning Objectives
- Use AgentCore Policy with Cedar rules for agent action boundaries
- Apply SageMaker Clarify for bias detection in GenAI outputs
- Create model cards for documentation and transparency
- Design multi-account governance with SCPs
- Understand LLM-as-a-judge for fairness evaluations
Tasks
- Blog 30m
AgentCore Policy and Evaluations Blog
Cedar policies, natural language authoring, trust-but-verify approach.
- Read 30m
SageMaker Clarify for Bias Detection
Pre-training and post-training bias analysis. Model explainability.
- Blog 20m
Self-Service Digital Assistant with Lex and Bedrock KB
Amazon Lex + Bedrock for structured dialogue chatbots. Contact center pattern.
- Study 60m
Governance Topics Review
Multi-account SCPs, adversarial testing, LLM-as-a-judge for fairness, Connect + Lex + Bedrock for contact centers.
Exam Skills
Write your understanding, then reveal the reference answer.
Hands-On Lab
Build real muscle memory with these activities.
Configure AgentCore Policy with Cedar Rules
Set up Cedar policies to control agent behavior and limit which tools an agent can invoke.
1. Open the AgentCore console and navigate to Policy
2. Create a new policy store for your agent
3. Write a Cedar policy: `permit (principal, action == Action::"invokeToolGroup", resource == ToolGroup::"approved-tools");`
4. Add a deny rule: `forbid (principal, action, resource) when { context.sensitivityLevel > 3 };`
5. Test the policy by invoking the agent with both permitted and denied tool requests
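To build intuition for what the two Cedar rules above enforce, here is a minimal Python sketch of the same trust-but-verify decision logic. This is purely illustrative, not the AgentCore API; the tool names and the `authorize_tool_call` helper are hypothetical.

```python
# Illustrative sketch (NOT the AgentCore API): mirrors the permit/forbid
# pair above in plain Python. Tool names are hypothetical.
APPROVED_TOOLS = {"search_kb", "create_ticket"}  # stands in for ToolGroup::"approved-tools"

def authorize_tool_call(tool: str, sensitivity_level: int) -> bool:
    """Permit only approved tools, and forbid anything above sensitivity 3."""
    if sensitivity_level > 3:       # mirrors the Cedar forbid rule (forbid wins)
        return False
    return tool in APPROVED_TOOLS   # mirrors the Cedar permit rule

print(authorize_tool_call("search_kb", sensitivity_level=1))  # permitted
print(authorize_tool_call("search_kb", sensitivity_level=4))  # denied: forbid wins
print(authorize_tool_call("delete_db", sensitivity_level=1))  # denied: not approved
```

Note the ordering: in Cedar, an applicable `forbid` always overrides a `permit`, which is why the sensitivity check comes first here.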
Review SageMaker Clarify Bias Reports
Explore SageMaker Clarify bias detection capabilities for understanding model fairness.
1. Open SageMaker Studio and navigate to the Clarify section
2. Review the pre-training bias metrics: Class Imbalance (CI), Difference in Proportions of Labels (DPL)
3. Review post-training bias metrics: Disparate Impact (DI), Difference in Positive Proportions in Predicted Labels (DPPL)
4. Understand the SHAP-based feature attribution for model explainability
5. Note how these apply to GenAI: evaluate model outputs for fairness across demographic groups
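The four metrics in the steps above have simple closed forms, and computing them once by hand makes the Clarify reports much easier to read. The sketch below uses toy counts; the formulas follow the Clarify metric definitions, but confirm the exact sign and reference-group conventions against the documentation.

```python
# Toy counts for two facets (a = advantaged group, d = disadvantaged group).
n_a, n_d = 800, 200               # group sizes
pos_a, pos_d = 400, 60            # observed positive labels per group
pred_pos_a, pred_pos_d = 420, 50  # predicted positive labels per group

ci = (n_a - n_d) / (n_a + n_d)                # Class Imbalance (pre-training)
dpl = pos_a / n_a - pos_d / n_d               # Difference in Proportions of Labels
dppl = pred_pos_a / n_a - pred_pos_d / n_d    # DPPL (post-training)
di = (pred_pos_d / n_d) / (pred_pos_a / n_a)  # Disparate Impact (ratio form)

print(f"CI={ci:.2f} DPL={dpl:.2f} DPPL={dppl:.2f} DI={di:.2f}")
```

With these toy numbers DI comes out near 0.48, well under the common four-fifths (0.8) rule of thumb, so this model would flag a disparate-impact concern.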
Scenarios
Think through each scenario before revealing the answer.
HR Resume Screening Bias Mitigation
- Which service detects bias in model outputs?
- How do you document the model's intended use and limitations?
- How do you prevent the agent from making final hiring decisions?
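For the fairness angle of this scenario, an LLM-as-a-judge harness is one common pattern: a second model grades each screening output against a fairness rubric. The sketch below is illustrative only; the judge call is stubbed so it runs standalone (in practice it would invoke a Bedrock model), and the prompt wording and `stub_judge` heuristic are assumptions.

```python
# Illustrative LLM-as-a-judge harness for fairness checks. The judge is a
# stub here; in practice it would be a Bedrock model invocation.
JUDGE_PROMPT = (
    "You are evaluating a resume-screening assistant for fairness.\n"
    "Candidate summary: {output}\n"
    "Answer PASS if the reasoning relies only on job-relevant qualifications, "
    "FAIL if it references protected attributes such as gender or ethnicity."
)

def judge_output(output: str, call_judge) -> bool:
    """Return True if the judge model rates the output as fair."""
    verdict = call_judge(JUDGE_PROMPT.format(output=output))
    return verdict.strip().upper().startswith("PASS")

def stub_judge(prompt: str) -> str:
    # Hypothetical heuristic standing in for a real judge model:
    # flag any candidate summary that mentions age.
    return "FAIL" if "age" in prompt.lower() else "PASS"

print(judge_output("Strong Python skills, 5 years ML experience", stub_judge))  # fair
print(judge_output("Candidate's age (52) may be a concern", stub_judge))        # flagged
```

Pairing this with a human-in-the-loop gate answers the last scenario question: the agent can score and flag, but a person makes the final hiring decision.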
Practice Questions
17 questions across 3 difficulty levels.
Further Reading
Go deeper into today's topics.
AgentCore Policy — Cedar Policies for Agent Governance
Getting started: write Cedar policies via natural language or code, intercept agent-tool calls, audit trail.
AgentCore Policy (GA) — Cedar Policies
Cedar policies, natural language authoring, trust-but-verify approach for agent governance.
Self-Service Digital Assistant with Lex and Bedrock KB
Amazon Lex + Bedrock for structured dialogue chatbots. Contact center pattern.
PII Redaction Architecture — BDA + Guardrails
Bedrock Data Automation + Guardrails for email PII pipeline.