Setup & Installation
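A minimal setup sketch, assuming the `@azure-rest/ai-content-safety` REST client for TypeScript and a Content Safety resource whose endpoint and key are supplied through environment variables (the variable names here are placeholders, not part of the skill):

```typescript
// Assumed install step (shell): npm install @azure-rest/ai-content-safety @azure/core-auth
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

// Endpoint and key come from the Content Safety resource in the Azure portal.
const endpoint = process.env.CONTENT_SAFETY_ENDPOINT ?? "https://<resource>.cognitiveservices.azure.com";
const key = process.env.CONTENT_SAFETY_KEY ?? "<key>";

export const client = ContentSafetyClient(endpoint, new AzureKeyCredential(key));
```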
What This Skill Does
Analyzes text and images for harmful content using Azure AI Content Safety. Detects hate speech, sexual content, violence, and self-harm across four severity levels. Supports custom blocklists for domain-specific term filtering. It returns structured severity scores per category rather than a binary pass/fail, so you can apply different thresholds per context without building your own classifier.
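A minimal text-moderation sketch under the same assumptions; the `moderateText` helper and the threshold values are illustrative, not part of the skill:

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);

// Highest severity this app tolerates per category; values are illustrative.
// With the default output type, severities are reported as 0, 2, 4, or 6.
const thresholds: Record<string, number> = { Hate: 2, Sexual: 2, Violence: 2, SelfHarm: 0 };

// Returns true when the text is acceptable under the thresholds above.
async function moderateText(text: string): Promise<boolean> {
  const result = await client.path("/text:analyze").post({ body: { text } });
  if (isUnexpected(result)) {
    throw result.body.error;
  }
  // The service reports a severity score per category rather than a pass/fail verdict.
  return result.body.categoriesAnalysis.every(
    (c) => (c.severity ?? 0) <= (thresholds[c.category] ?? 0)
  );
}

moderateText("example user input").then((ok) => console.log(ok ? "allow" : "block"));
```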
When to Use It
- Moderating user-generated text or images before they are stored or displayed
- Enforcing domain-specific term filters with custom blocklists (see the sketch after this list)
- Tuning per-category severity thresholds instead of relying on a single pass/fail verdict
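A short blocklist sketch under the same assumptions; the blocklist name `product-terms` is hypothetical and would need to be created in the resource beforehand:

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);

// Returns the blocklist terms that matched in the text, if any.
async function matchedBlocklistTerms(text: string): Promise<string[]> {
  const result = await client.path("/text:analyze").post({
    body: {
      text,
      blocklistNames: ["product-terms"],
      haltOnBlocklistHit: true, // stop category analysis once a blocklist term matches
    },
  });
  if (isUnexpected(result)) {
    throw result.body.error;
  }
  return (result.body.blocklistsMatch ?? []).map((m) => m.blocklistItemText);
}
```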
