AI-102 Certification Study Notes
Azure provides a comprehensive ecosystem for AI development, deployment, and management with distinct architectural components that work together to enable the full lifecycle of AI solutions. Understanding these components is essential for designing and implementing enterprise-grade AI solutions.
Core Architecture Components
The Azure AI platform consists of several interconnected architectural layers:
- Azure AI Services: Pre-built APIs and models for common AI capabilities
- Azure AI Foundry: Platform for AI project organization and resource management
- Azure AI Hubs: Advanced orchestration for cross-functional AI teams
- Development Tools: SDKs, APIs, and integrated environments for AI development
- Management & Governance: Security, monitoring, and responsible AI frameworks
Azure AI Services
Azure AI Services provide pre-built APIs and models that can be integrated into applications with minimal machine learning expertise. These services are organized into functional categories that align with specific AI capabilities:
Service | Capabilities | Use Cases
---|---|---
Azure OpenAI | GPT language models, DALL-E image generation, embeddings, fine-tuning | Generative text, conversational AI, image creation, RAG systems |
Azure AI Vision | Image analysis, object detection, OCR, spatial analysis, face detection | Image tagging, content moderation, accessibility, retail analytics |
Azure AI Speech | Speech-to-text, text-to-speech, speech translation, speaker recognition | Voice assistants, transcription services, accessibility tools |
Azure AI Language | Entity extraction, sentiment analysis, summarization, CLU, custom QA | Text analysis, chatbots, document processing, customer feedback |
Azure AI Document Intelligence | Form processing, layout analysis, prebuilt models, composed models | Invoice processing, receipt scanning, ID verification |
Azure AI Content Safety | Text and image content moderation, safety filtering, prompt shields | User-generated content filtering, compliance, mitigating harmful content
Azure AI Search | Cognitive search, vector search, semantic ranking, enrichment pipelines | RAG implementations, knowledge mining, enterprise search |
Azure AI Translator | Text translation, document translation, custom terminology | Localization, multilingual applications, content translation |
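As a concrete example of one service in the table, the payload shapes below sketch a sentiment-analysis call to Azure AI Language via its `analyze-text` REST operation (POST to `{endpoint}/language/:analyze-text?api-version=2023-04-01`). The field names follow the public API, but verify against the current reference before relying on them; the response-parsing helper assumes the documented result layout.

```python
def build_sentiment_body(texts, language="en"):
    """Request body for POST {endpoint}/language/:analyze-text?api-version=2023-04-01."""
    return {
        "kind": "SentimentAnalysis",
        "analysisInput": {
            "documents": [
                # Each document needs a unique id; the service echoes it back.
                {"id": str(i), "language": language, "text": t}
                for i, t in enumerate(texts, start=1)
            ]
        },
    }

def overall_sentiments(response):
    """Map document id -> sentiment label from the service's JSON response."""
    return {d["id"]: d["sentiment"] for d in response["results"]["documents"]}
```

The same `kind`-based body structure is reused across the Language service's other operations (entity extraction, summarization), which is why a single endpoint can serve several capabilities.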
Azure AI Service Resource Types
Single-Service Resources
Dedicated resources for specific AI services with:
- Service-specific endpoints and authentication keys
- Free tier options for development/evaluation
- Service-specific scaling and monitoring
- Simplified management for single-capability applications
Multi-Service Resources
Consolidated resources encompassing multiple AI services with:
- Single resource for multiple capabilities
- Unified billing and quota management
- Simplified access management
- Better suited for applications using multiple AI services
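The practical difference between the two resource types shows up only in the endpoint and key you configure; the call itself is identical. A minimal sketch, assuming hypothetical resource names and placeholder keys:

```python
import json
import urllib.request

# Hypothetical resources for illustration. A single-service resource exposes
# one capability; a multi-service resource serves several from one endpoint/key.
SINGLE_SERVICE = {
    "endpoint": "https://my-language.cognitiveservices.azure.com",
    "key": "<language-resource-key>",
}
MULTI_SERVICE = {
    "endpoint": "https://my-ai-services.cognitiveservices.azure.com",
    "key": "<multi-service-key>",
}

def build_request(resource, path, payload):
    """Build an authenticated POST request; works identically for either resource type."""
    return urllib.request.Request(
        url=resource["endpoint"] + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": resource["key"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Swapping `SINGLE_SERVICE` for `MULTI_SERVICE` changes nothing else in the code, which is why multi-service resources simplify applications that call several AI capabilities.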
Azure AI Foundry
Azure AI Foundry provides a structured platform for AI development that organizes resources, code, and configurations into cohesive projects. It offers two primary project types:
Foundry Projects
- Associated with Azure AI Foundry resources
- Support for Azure OpenAI models
- Agent service integration
- Connections to Azure AI services
- Evaluation and responsible AI tools
- Ideal for generative AI chat applications and agents
Hub-Based Projects
- Associated with Azure AI hub resources
- Includes managed compute resources
- Prompt Flow development support
- Storage and key vault integration
- Cross-portal compatibility (Azure AI Foundry and Azure Machine Learning studio)
- Suited for advanced scenarios like model fine-tuning
Development Tools and SDKs
Azure AI development relies on various programming tools, SDKs, and APIs that enable interaction with AI services and orchestration of complex AI workflows:
Key SDKs for AI-102
- Azure AI Foundry SDK: Programmatically access projects, resources, and connections
- Azure AI Services SDKs: Language-specific libraries for consuming individual services
- REST APIs: Direct HTTP interfaces to all Azure AI services
- Azure AI Foundry Agent Service: Building orchestrated AI agents with tools
- Prompt Flow SDK: Orchestration logic for complex prompting scenarios
Development Environments
- Visual Studio: Full IDE for .NET development
- Visual Studio Code: Lightweight editor with extensions for AI development
- Azure AI Foundry VS Code Container: Pre-configured environment with SDKs
- GitHub Integration: Source control and CI/CD pipelines for AI solutions
- GitHub Copilot: AI-assisted development for increased productivity
Implementation Considerations
Regional Availability
AI services have varied regional availability. Use the Product Availability Table to verify service availability in target regions. For Azure OpenAI, reference the Model Summary Table for region-specific deployments.
Cost Management
AI services use consumption-based pricing models. Understand per-transaction costs, throughput limits, and storage implications. Use the Azure Pricing Calculator to estimate costs based on expected usage patterns.
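The consumption model reduces to a simple calculation: billable transactions beyond any free-tier allowance, priced per unit of usage. A sketch with placeholder prices (real rates vary by service, tier, and region; use the Azure Pricing Calculator for actual figures):

```python
def estimate_monthly_cost(transactions, price_per_1k, free_tier_transactions=0):
    """Estimate monthly cost for a consumption-priced AI service.

    Prices here are placeholders, not real Azure rates: transactions beyond
    the free tier are billed per 1,000 calls.
    """
    billable = max(0, transactions - free_tier_transactions)
    return round(billable / 1000 * price_per_1k, 2)
```

For example, 10,000 calls against a 5,000-call free tier at a hypothetical $1.00 per 1,000 calls would cost $5.00 for the month.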
Responsible AI
Implement Microsoft's six responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Access to sensitive capabilities such as the Face API requires approval through a Limited Access application.
Deployment Options
Cloud APIs
The default deployment model: consume AI services via REST endpoints or SDKs served from Azure datacenters.
Containers
Deploy selected AI services as Docker containers for local processing, edge deployment, or air-gapped environments.
Hybrid Architectures
Combine containerized services with cloud APIs for optimal balance of performance, cost, and compliance.
Exam Focus Areas
The AI-102 exam emphasizes these technical implementation aspects:
- Understanding architectural differences between AI services and deployment options
- Selecting appropriate resource types (single vs. multi-service) for specific scenarios
- Implementing security, authentication, and resource management for AI services
- Designing solutions that comply with responsible AI principles
- Selecting appropriate development tools, SDKs, and APIs for AI implementation
The Azure AI Engineer builds, manages, and deploys AI solutions that leverage Azure AI services (formerly Azure Cognitive Services), Azure Machine Learning, and other Microsoft AI tools. This role requires advanced technical expertise in AI services implementation, network architecture, solution design patterns, and data management.
Technical Skills Required
- Service Implementation: Selecting appropriate Azure AI services based on technical requirements
- System Integration: Integrating AI services into existing infrastructure with proper authentication
- DevOps for AI: Managing CI/CD pipelines for AI solutions with proper version control
- Diagnostic & Debugging: Using technical tools to diagnose service performance issues
- Security Architecture: Implementing proper security protocols for AI solutions
- Cognitive Solution Design: Architecting end-to-end AI solutions that fulfill business requirements
AI Service Architecture Models
Azure AI services follow specific architectural patterns that impact their implementation, scaling, and management. Understanding these patterns is crucial for effective service implementation:
Architecture Type | Technical Characteristics | Implementation Considerations | Use Cases
---|---|---|---
Cloud-Based API Services | REST API endpoints, token-based authentication, service-specific SDK integration | Network latency, API throttling limits, region availability, redundancy | Cross-platform applications, services with minimal ML expertise required |
Containerized Services | Docker containerization, local network calls, image version management | Host compute requirements, container registry management, update processes | Edge computing, air-gapped environments, data residency requirements |
Hybrid Deployments | Combined cloud/edge architecture, event-driven data synchronization | Synchronization protocols, connection resiliency, conflict resolution | Intermittent connectivity scenarios, high-volume processing with latency constraints |
Service Resource Types
Azure AI Services can be provisioned using two distinct resource types, each with specific technical implications:
Single-Service Resources
- Technical Implementation: Dedicated endpoint for one AI service
- Authentication: Service-specific key or token
- Network Configuration: Service-specific private endpoint
- Usage Metrics: Isolated billing and monitoring
- SKU Selection: Service-specific pricing tiers
Multi-Service Resources
- Technical Implementation: Single resource accessing multiple services
- Authentication: Shared key across services
- Network Configuration: Unified private endpoint architecture
- Usage Metrics: Consolidated usage tracking
- SKU Selection: Unified pricing tier across services
Technical Implementation Patterns
REST API Implementation
Direct HTTP calls to Azure AI service endpoints using these technical components:
- Endpoint URL Structure:
https://[region].api.cognitive.microsoft.com/[service]/[version]/[resource] (regional form) or https://[resource-name].cognitiveservices.azure.com/[service]/[version]/[resource] (custom subdomain, required for Microsoft Entra token authentication)
- Authentication Headers: Ocp-Apim-Subscription-Key (key-based) or Authorization: Bearer [token] (token-based)
- Content Negotiation: Content-Type and Accept headers for JSON/binary data
- Rate Limiting: Implementation of exponential backoff for 429 responses
- Error Handling: Structured error response parsing (400-500 status codes)
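The rate-limiting bullet above can be sketched as a small retry helper. `call_with_backoff` is a hypothetical helper, not an Azure SDK function: it wraps any zero-argument callable that returns a response-like object (e.g. a lambda around `requests.post`), honours a `Retry-After` header when the service sends one, and otherwise backs off exponentially.

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry `send()` on HTTP 429 with exponential backoff.

    `send` is any zero-argument callable returning an object with
    `status_code` and `headers` attributes (e.g. a requests.Response).
    """
    for attempt in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        # Prefer the service's Retry-After hint; otherwise back off 1s, 2s, 4s, ...
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
        time.sleep(delay)
    raise RuntimeError("rate limited: retry budget exhausted")
```

The Azure SDKs implement this pattern internally via their retry pipeline policies; hand-rolled retry logic like this is mainly needed when calling the REST APIs directly.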
SDK Integration Pattern
Language-specific SDK implementation with these technical considerations:
- Client Authentication Flow: TokenCredential vs Key authentication
- Azure Identity Library: DefaultAzureCredential for unified authentication
- Request Pipeline Customization: Implementing custom policies for retry, logging
- Async Implementation: Task-based asynchronous pattern for non-blocking calls
- Exception Handling: Service-specific exception hierarchies
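The task-based asynchronous pattern the SDKs expose (async clients with awaitable operations) can be sketched with stdlib asyncio. `analyze_document` below is a hypothetical stand-in for an awaitable SDK call, not a real client method:

```python
import asyncio

async def analyze_document(doc_id):
    """Stand-in for an awaitable SDK call such as `await client.analyze(...)`."""
    await asyncio.sleep(0.01)  # simulates network I/O without blocking the event loop
    return {"id": doc_id, "status": "succeeded"}

async def analyze_batch(doc_ids):
    # gather() awaits all calls concurrently instead of one at a time,
    # so total latency approximates the slowest call rather than the sum.
    return await asyncio.gather(*(analyze_document(d) for d in doc_ids))

results = asyncio.run(analyze_batch(["a", "b", "c"]))
```

With real async clients the same structure applies, plus an `async with` context manager so the underlying HTTP transport is closed cleanly.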
AI Foundry Platform
Azure AI Foundry provides a structured development environment with technical components for building AI solutions:
Prompt Flow Architecture
- DAG-based flow representation for LLM interactions
- Connection management for service credentials
- Tool integration for custom code components
- Runtime environment configuration for dependencies
- Evaluation framework with ground truth comparisons
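The DAG idea behind Prompt Flow can be illustrated with stdlib `graphlib`: nodes run in dependency order, each passing its outputs downstream. This sketch only shows the ordering concept with a hypothetical three-node flow and stubbed node functions, not the real Prompt Flow API.

```python
from graphlib import TopologicalSorter

# Each node maps to the set of nodes it depends on.
flow = {
    "fetch_context": set(),
    "build_prompt": {"fetch_context"},
    "call_llm": {"build_prompt"},
}

# Stub node implementations: each takes and returns the shared state dict.
nodes = {
    "fetch_context": lambda state: {**state, "context": "retrieved passages"},
    "build_prompt": lambda state: {**state, "prompt": f"Answer using: {state['context']}"},
    "call_llm": lambda state: {**state, "answer": "stubbed model output"},
}

state = {}
# static_order() yields nodes with all dependencies satisfied first.
for node in TopologicalSorter(flow).static_order():
    state = nodes[node](state)
```

In Prompt Flow proper, the connection management and runtime configuration listed above determine which credentials and environments each node uses when it executes.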
Model Integration Interfaces
- Standardized model registration protocols
- Version control system for model artifacts
- Model validation pipelines for quality assurance
- Deployment target configuration (endpoint management)
- Inference schema definition and validation
Security Architecture
Network Security
Implement Private Endpoints with DNS integration to secure AI service traffic within virtual networks, using NSG rules for granular access control.
Identity & Access
Implement Managed Identities with RBAC for service-to-service authentication, using Azure Key Vault for key storage with proper access policies.
Data Protection
Configure Customer Managed Keys (CMK) for data encryption, implement content filtering rules, and establish data residency compliance controls.
Exam Focus Areas
The AI-102 exam emphasizes these technical implementation aspects:
- Implementing proper service authentication mechanisms (keys, tokens, managed identities)
- Creating appropriate network isolation with private endpoints and service endpoints
- Selecting the right SDK components and API methods for specific tasks
- Implementing error handling and retry logic for resilient applications
- Configuring appropriate resource types (single vs. multi-service) based on technical requirements
- Applying content moderation and responsible AI controls in technical implementations