azure-ai-contentsafety-java
This SDK enables developers to build robust content moderation applications in Java using the Azure AI Content Safety service. It analyzes both text and images for harmful content, returning severity scores for categories such as hate speech, self-harm, sexual material, and violence. It also supports custom blocklist management, so applications can flag specific undesirable keywords or phrases and enforce strict content compliance across enterprise workloads.
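As a minimal sketch of the text-analysis flow described above, the snippet below builds a client and prints the per-category severity scores for a text sample. It assumes the `azure-ai-contentsafety` dependency is on the classpath; the environment variable names and the sample text are illustrative, and real values must come from your own Content Safety resource.

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;

public class AnalyzeTextSample {
    public static void main(String[] args) {
        // Endpoint and key come from your Content Safety resource in the
        // Azure portal; the env var names here are placeholders.
        String endpoint = System.getenv("CONTENT_SAFETY_ENDPOINT");
        String key = System.getenv("CONTENT_SAFETY_KEY");

        ContentSafetyClient client = new ContentSafetyClientBuilder()
                .credential(new KeyCredential(key))
                .endpoint(endpoint)
                .buildClient();

        // The service scores the input against each harm category
        // (hate, self-harm, sexual, violence).
        AnalyzeTextResult result =
                client.analyzeText(new AnalyzeTextOptions("Sample text to moderate"));

        for (TextCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            System.out.println(analysis.getCategory()
                    + " severity: " + analysis.getSeverity());
        }
    }
}
```

Image analysis follows the same pattern with `analyzeImage`, and blocklist management is handled through a separate blocklist client in the same package.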