
azure-ai-contentsafety-ts

security

Analyzes text and images for harmful content using Azure AI Content Safety. Detects hate speech, sexual content, violence, and self-harm across four severity levels. Supports custom blocklists for domain-specific term filtering.

Setup & Installation

npx skills add https://github.com/microsoft/azure-ai-contentsafety-ts --skill azure-ai-contentsafety-ts
Or paste the link and ask your coding assistant to install it:
https://github.com/microsoft/azure-ai-contentsafety-ts
View on GitHub

What This Skill Does

Analyzes text and images for harmful content using Azure AI Content Safety. Detects hate speech, sexual content, violence, and self-harm across four severity levels. Supports custom blocklists for domain-specific term filtering. It returns structured severity scores per category rather than a binary pass/fail, so you can apply different thresholds per context without building your own classifier.
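The per-category severity scores make threshold policies straightforward to express in application code. The sketch below is a hypothetical helper (not part of this skill or the Azure SDK) showing how a caller might apply different blocking thresholds per category; the four category names and the 0-7 severity scale follow the Azure AI Content Safety service, but the `isBlocked` function and its types are illustrative assumptions.

```typescript
// Hypothetical helper types; category names mirror Azure AI Content Safety's
// four analysis categories. Severity is an integer on the service's 0-7 scale.
type Category = "Hate" | "SelfHarm" | "Sexual" | "Violence";

interface CategoryAnalysis {
  category: Category;
  severity: number;
}

// Per-context policy: block when a category's severity meets or exceeds
// its threshold. Categories without an entry are never blocked on.
type Thresholds = Partial<Record<Category, number>>;

function isBlocked(results: CategoryAnalysis[], thresholds: Thresholds): boolean {
  return results.some((r) => {
    const limit = thresholds[r.category];
    return limit !== undefined && r.severity >= limit;
  });
}

// Example: a stricter policy for Violence than for Hate.
const analysis: CategoryAnalysis[] = [
  { category: "Hate", severity: 2 },
  { category: "Violence", severity: 4 },
];

console.log(isBlocked(analysis, { Hate: 6, Violence: 4 })); // true: Violence meets its threshold
console.log(isBlocked(analysis, { Hate: 6, Violence: 6 })); // false: neither category reaches its limit
```

Because the service returns scores rather than a verdict, the same analysis result can be reused under different policies, e.g. a lenient threshold for internal tooling and a strict one for public-facing content.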

When to use it

  • Moderating user-generated text or images before storage or display
  • Applying different severity thresholds per category or per context
  • Filtering domain-specific terms with custom blocklists