Setup & Installation
What This Skill Does
The Azure AI Content Safety SDK for Python connects to Microsoft's content moderation API to analyze text and images for harmful content. It classifies results across four harm categories (hate, sexual, violence, self-harm), each with a severity score, and supports custom blocklists for domain-specific filtering. Building this yourself would mean training and maintaining classifiers for all four categories with calibrated severity levels; the SDK offloads that to a managed API with a single call per request.
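To make the per-category severity output concrete, here is a minimal sketch of a moderation gate over an `analyze_text`-style result. The real flow is `ContentSafetyClient.analyze_text(AnalyzeTextOptions(text=...))`, which returns a severity per category; the dict below stands in for that response, and the threshold value is an illustrative assumption, not a service default.

```python
# The four harm categories reported by the Content Safety API.
HARM_CATEGORIES = ("Hate", "Sexual", "Violence", "SelfHarm")

def should_block(severities: dict, threshold: int = 2) -> bool:
    """Return True if any harm category meets the blocking threshold.

    The text API reports severity on a 0-7 scale, where 0 means safe;
    missing categories are treated as severity 0.
    """
    return any(severities.get(cat, 0) >= threshold for cat in HARM_CATEGORIES)

# Stand-in for a response: category name -> severity score.
sample = {"Hate": 0, "Sexual": 0, "Violence": 4, "SelfHarm": 0}
print(should_block(sample))     # True: Violence severity 4 >= 2
print(should_block(sample, 6))  # False: nothing reaches severity 6
```

In practice you would build `severities` from the `categories_analysis` items on the SDK response and tune the threshold per category to match your application's tolerance.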
When to Use It
- Moderating user-generated text or images before display or storage
- Screening LLM prompts and responses for hate, sexual, violence, or self-harm content
- Enforcing domain-specific term filtering with custom blocklists
- Debugging unexpected category classifications or tuning severity thresholds
