Azure AI Content Safety
An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.
We ran the baseline scan and attack strategy scan against our model using the Red Team SDK. Although the output was generated, the logs show an error. Should I consider the report valid or invalid?
Based on the details you shared, my recommendation is to treat the report as invalid for the purposes of an actual red team assessment. If the logs show an error during the scan, the run may not have completed all of the configured attack strategies, so the generated output cannot be trusted as a complete or representative assessment. Re-run the scan and confirm it finishes without errors before relying on the results.
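As a rule of thumb, a scan's report should only be accepted when its logs are free of errors. The helper below is an illustrative sketch of that check, not part of the Red Team SDK; the function name and log format are assumptions for the example.

```python
# Hypothetical helper (not a Red Team SDK API): decide whether a generated
# report should be trusted based on the scan's log output.
def report_is_valid(log_lines):
    """Return False if any log line records an error, True otherwise."""
    return not any("ERROR" in line.upper() for line in log_lines)


# Example with a log shape assumed for illustration:
logs = [
    "INFO baseline scan started",
    "INFO attack strategy scan started",
    "ERROR attack strategy run failed to complete",
]
print(report_is_valid(logs))  # False -> treat the report as invalid
```

In practice you would apply the same idea to whatever log output your scan produces: any recorded error means the report may be incomplete and the run should be repeated.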
Feel free to accept this as an answer if it resolves your question.
Thank you for reaching out to the Microsoft Q&A portal.