An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.
Hi Tony Williams and Saurabh Gupta,
Apologies for the delay in responding.
I was also not able to get my Content Safety resource picked up in a personal subscription. I have not yet had a chance to sync internally with the relevant teams to confirm the deprecation status of Content Safety resources.
As a workaround, you can test the "Custom Categories" feature under the "Guardrails and Controls" > "Try out" option.
The Standard tier is available in the East US and Australia East regions; you can also leverage an East US 2 AI resource for evaluation.
Reference: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview#region-availability
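If it helps while you evaluate regions, here is a minimal Python sketch for confirming that a Content Safety resource deployed in one of the supported regions (e.g., East US) is provisioned and reachable. It assumes the azure-ai-contentsafety package; the endpoint and key values are placeholders for your own resource. Note that it exercises the standard text-analysis API with the built-in harm categories, not the Custom Categories preview, which the workaround above tests through the portal.

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholders -- substitute the endpoint and key of your own resource
# (e.g., one created in East US or Australia East, per the regions above).
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com/"
KEY = "<your-content-safety-key>"

client = ContentSafetyClient(ENDPOINT, AzureKeyCredential(KEY))

# Run a simple text analysis against the built-in harm categories
# to confirm the resource responds.
response = client.analyze_text(AnalyzeTextOptions(text="Sample text to evaluate."))

for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```

If this runs without an authentication or endpoint error and prints severity scores per category, the resource in that region is usable for further testing.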
Thank you for understanding.