Hello Sanmeet Singh,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand that the assistant response consistently returns None when you use the Azure OpenAI Python SDK in a multi-step workflow involving vector store uploads and thread-based requests.
Below are step-by-step improvements to your current implementation.
- Add detailed logging and exception handling during vector store creation and file upload to catch and log any issues.
```python
def upload_files_to_vector_store(self, file_streams):
    try:
        print("Uploading files to vector store...")
        seen_files = set()
        unique_files = []
        for file_stream in file_streams:
            file_name = file_stream.name
            if file_name not in seen_files:
                seen_files.add(file_name)
                unique_files.append(file_stream)
                print(f"Adding file: {file_name}")
            else:
                print(f"Skipping duplicate file: {file_name}")

        # Create the vector store
        vector_store = self.client.beta.vector_stores.create(name="My Vector Store")
        print(f"Vector store created with ID: {vector_store.id}")

        # Upload files to the vector store and wait for indexing to finish
        batch = self.client.beta.vector_stores.file_batches.upload_and_poll(
            vector_store_id=vector_store.id,
            files=unique_files,
        )
        print("File batch upload completed. Batch ID:", batch.id)
        print("File batch status:", batch.status, "-", batch.file_counts)

        # Update the assistant with the vector store ID
        self.client.beta.assistants.update(
            assistant_id=self.assistant_id,
            tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
        )
        print("Assistant updated with vector store.")

        self.vector_store_id = vector_store.id
        return self.vector_store_id
    except Exception as e:
        print(f"Error in vector store upload: {e}")
        return None
```
This will help you confirm that vector store creation and file upload complete successfully; a hypothetical usage sketch follows.
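A minimal usage sketch of the helper above (the `assistant_helper` object and file names are placeholders for your own):

```python
# Open the documents, upload them, and stop early if the vector store
# could not be created, instead of continuing with a None ID.
with open("report.pdf", "rb") as f1, open("notes.txt", "rb") as f2:
    vector_store_id = assistant_helper.upload_files_to_vector_store([f1, f2])

if vector_store_id is None:
    raise RuntimeError("Vector store upload failed - check the logs above.")
```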
Secondly, make sure every step of the assistant response process has proper logging and that you capture all streaming events.
```python
from openai import AssistantEventHandler

class LoggingEventHandler(AssistantEventHandler):
    """Log every streaming event so failures are not silently swallowed."""

    def on_event(self, event):
        if event.event == "thread.run.failed":
            print(f"Run failed: {event.data.last_error}")
        else:
            print(f"Event received: {event.event}")

thread = self.client.beta.threads.create(
    messages=[{"role": "user", "content": prompt}],
    tool_resources={"file_search": {"vector_store_ids": [self.vector_store_id]}},
)

try:
    with self.client.beta.threads.runs.stream(
        thread_id=thread.id,
        assistant_id=self.assistant_id,
        instructions="Provide a detailed response.",
        event_handler=LoggingEventHandler(),
    ) as stream:
        stream.until_done()  # blocks until the run finishes; it does not return the reply
        messages = stream.get_final_messages()  # the assistant's actual response messages
        print("Stream response completed:", messages)
except Exception as e:
    print(f"Error during assistant streaming: {e}")
```
This ensures the assistant response is requested with proper logging. Note that `until_done()` only waits for the run to finish and does not return the reply, so read the result with `stream.get_final_messages()` as shown above.
- Make sure the uploaded files are actually indexed and accessible by querying the vector store directly, as sketched below. Also double-check your Azure OpenAI subscription limits and confirm that all required configuration values (e.g., assistant ID, vector store ID) are valid. Your event handler should report in detail what happens during the stream; if errors occur, they will be captured there.
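A minimal sketch of such a check, assuming the same `client` and the `vector_store_id` returned by the upload helper above:

```python
# List the files attached to the vector store and verify their indexing status.
files = client.beta.vector_stores.files.list(vector_store_id=vector_store_id)
for vs_file in files:
    # Each file should reach status "completed" before file_search can use it.
    print(vs_file.id, vs_file.status, vs_file.last_error)
```

If any file reports a failed status, re-upload it before starting the run.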
Other things you can do next are:
- Capture and review logs to understand what happens during each stage.
- Use `print` or the `logging` module to inspect the raw responses and events; a sketch of enabling the SDK's debug logging follows this list.
- Make sure you are using the latest version of the `openai` Python SDK (which provides the `AzureOpenAI` client).
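A minimal sketch of turning on debug logging, assuming the standard `logging` module and the SDK's default logger names:

```python
import logging

# Send DEBUG-level output to the console so raw requests, responses,
# and streaming events are visible next to your own print statements.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)
```

Recent versions of the SDK also honor the `OPENAI_LOG=debug` environment variable for the same purpose.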
I hope this is helpful! Do not hesitate to let me know if you have any other questions.
Please don't forget to close the thread here by upvoting and accepting this as an answer if it is helpful.