Hello Karthickumar Karuppiah,
Welcome to Microsoft Q&A and Thank you for reaching out.
The Red Team Agent SDK is primarily designed to help evaluate AI safety risks by simulating a wide range of adversarial behaviors. It focuses on Microsoft-defined risk categories and attack strategies such as harmful content generation, jailbreak attempts, evasion, and deception-based attacks.
While the SDK is excellent for safety-related testing, it does not provide direct or full coverage of the OWASP Top 10 for LLMs. OWASP risks extend beyond prompt behavior and often involve broader security concerns such as insecure plugin integrations, supply chain vulnerabilities, model theft, or infrastructure misconfigurations. These require traditional security testing and are not the core objective of the Red Team Agent SDK.
That said, several OWASP-related threats especially prompt injection, indirect prompt injection, and other manipulation-based attacks can be simulated with the SDK. These attacks fall under adversarial interaction categories, and the Red Team Agent can generate harmful or deceptive prompts to test the model’s robustness.
In addition, the Red Team SDK supports automated scans, allowing you to run repeated attack scenarios across the supported safety categories. This helps you quickly identify potential weaknesses and understand how the model responds under different adversarial conditions.
Although the SDK doesn’t formally state support for OWASP Top 10 testing, you can still experiment with prompt injection scenarios by crafting custom attack strategies. This makes it possible to simulate common injection behaviors even if they are not officially mapped to OWASP classifications.
After executing your tests, you can evaluate the attack success rate and analyze the results to pinpoint vulnerabilities. These insights will help you strengthen your system’s safeguards, refine mitigation strategies, and improve your overall security posture.
The Red Team Agent SDK is a powerful tool for assessing safety risks and prompt-level adversarial robustness, including many injection-related attacks. For complete OWASP Top 10 coverage, however, you’ll still need to combine it with broader application and infrastructure security testing. If you’d like, I can help map each OWASP risk to the right testing approach.
Please refer this
- Understanding Red Teaming for Azure OpenAI Service
- AI Red Teaming Agent - Azure AI Foundry
- Azure AI content Safety configuration
I Hope this helps. Do let me know if you have any further queries.
If this answers your query, please do click Accept Answer and Yes for was this answer helpful.
Thank you!