AI Red Team Agent: unable to get output

Karthickumar Karuppiah 80 Reputation points
2025-11-07T07:52:14.2633333+00:00

Prerequisites: the Red Team SDK and other required libraries should be installed, and the environment variables CLIENT_ID, CLIENT_SECRET, TENANT_ID, and AZURE_AI_PROJECT should be set. When I run the source code, I don't get any output.
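
For reference, here is a minimal sanity check to run before the scan. It assumes the AZURE_-prefixed names that azure-identity's DefaultAzureCredential actually reads from the environment (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID), plus the AZURE_AI_PROJECT variable the sample code uses:

import os

# DefaultAzureCredential's EnvironmentCredential expects the AZURE_-prefixed names;
# AZURE_AI_PROJECT is read by the sample code itself.
required = ["AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET", "AZURE_TENANT_ID", "AZURE_AI_PROJECT"]
missing = [name for name in required if not os.environ.get(name)]
print("Missing variables:", ", ".join(missing) if missing else "none")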

Azure AI Content Safety
An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.

Answer accepted by question author
  1. Aryan Parashar 3,380 Reputation points Microsoft External Staff Moderator
    2025-11-07T12:11:32.16+00:00

    Hi Karthickumar,

    Output can be seen in the Portal as shown below:
    [screenshot: Red Team scan results in the portal]

    However, executing the code below will save the logs and write the results of the Red Team scan to a JSON file:

    import os
    import asyncio
    from dotenv import load_dotenv
    from azure.identity import DefaultAzureCredential
    from azure.ai.evaluation.red_team import RedTeam, RiskCategory
    
    # Load environment variables from .env file
    load_dotenv()
    
    
    # Option 1: Provide the Azure AI Foundry project endpoint directly, copied as shown in the steps above
    # azure_ai_project = "https://your-foundry-name.services.ai.azure.com/api/projects/your-project-name"
    
    # Option 2: Read the endpoint from an environment variable (used below)
    azure_ai_project = os.environ.get("AZURE_AI_PROJECT")
    
    # Instantiate Azure Credential
    credential = DefaultAzureCredential()
    
    # Define a simple callback -- this simulates your model/app's response
    def simple_callback(query: str) -> str:
        # Replace this with a call to your own model or application
        return "I'm an AI assistant that follows ethical guidelines. I cannot provide harmful content."
    
    # Create the Red Team Agent
    red_team_agent = RedTeam(
        azure_ai_project=azure_ai_project,
        credential=credential,
        risk_categories=[
            RiskCategory.Violence,
            RiskCategory.HateUnfairness,
            RiskCategory.Sexual,
            RiskCategory.SelfHarm
        ],
        num_objectives=2,  # Number of attack prompts per risk category (customize as needed)
    )
    
    # Run the scan
    async def run_scan():
        await red_team_agent.scan(target=simple_callback, output_path="My-First-RedTeam-Scan.json")
        print("Scan complete. Results saved to My-First-RedTeam-Scan.json.")
    
    # Run the async scan from the main thread when executed as a script:
    if __name__ == "__main__":
        asyncio.run(run_scan())
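
    Once the scan completes, the JSON written to output_path can be inspected directly. The exact schema depends on the installed azure-ai-evaluation version, so the sketch below only reports the top-level structure rather than assuming any field names:

    import json
    
    # Inspect the scan output written by red_team_agent.scan(); the schema varies
    # across SDK versions, so only the top-level structure is reported here.
    with open("My-First-RedTeam-Scan.json", encoding="utf-8") as f:
        data = json.load(f)
    
    if isinstance(data, dict):
        print("Top-level keys:", list(data.keys()))
    else:
        print("Top-level type:", type(data).__name__)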
    
    

    If you are still facing an issue, please let me know by providing a screenshot of the error and the code you are using.

    Thank you for reaching out to The Microsoft Q&A Portal.

    1 person found this answer helpful.

