Setup & Installation
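To use the SDK, add it to your build. A minimal Maven sketch, assuming the `com.azure:azure-ai-contentsafety` artifact (the version shown is an assumption; check Maven Central for the latest release):

```xml
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-ai-contentsafety</artifactId>
  <!-- assumed version; verify the current release on Maven Central -->
  <version>1.0.0</version>
</dependency>
```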
What This Skill Does
Java SDK for Azure AI Content Safety. Provides APIs to analyze text and images for harmful content across four categories: hate, sexual, violence, and self-harm. Also supports custom blocklists for domain-specific term filtering. Azure's pre-trained models cover all four harm categories out of the box, so you skip training your own classifiers and get a severity scale you can threshold directly in your moderation logic.
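The analyze-and-threshold flow described above can be sketched as follows. This is a minimal example, not a definitive implementation: it assumes the endpoint and key are supplied via the environment variables `CONTENT_SAFETY_ENDPOINT` and `CONTENT_SAFETY_KEY`, and the blocking threshold of 4 is an illustrative policy choice, not an SDK default.

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;

public class TextModerationSketch {
    // Hypothetical policy helper: block content at or above the chosen severity.
    static boolean isBlocked(int severity, int threshold) {
        return severity >= threshold;
    }

    public static void main(String[] args) {
        // Endpoint and key come from your Content Safety resource
        // (env-var names here are assumptions for this sketch).
        ContentSafetyClient client = new ContentSafetyClientBuilder()
            .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
            .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
            .buildClient();

        // Analyze one piece of text across all four harm categories.
        AnalyzeTextResult result =
            client.analyzeText(new AnalyzeTextOptions("<text to screen>"));

        for (TextCategoriesAnalysis category : result.getCategoriesAnalysis()) {
            System.out.printf("%s severity: %d%n",
                category.getCategory(), category.getSeverity());
            if (isBlocked(category.getSeverity(), 4)) {
                System.out.println("-> blocked under this example policy");
            }
        }
    }
}
```

Because the service returns a numeric severity per category rather than a single verdict, the threshold in `isBlocked` is where your own moderation policy lives; stricter products can lower it, more permissive ones can raise it per category.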
When to use it
- Screening user-generated text or images for hate, sexual, violence, or self-harm content from a Java application
- Enforcing moderation policies by thresholding the severity scores the service returns per category
- Maintaining custom blocklists for domain-specific term filtering alongside the built-in classifiers
