Issue with Creating Storage Event Trigger in Azure Synapse – Access Token Missing Required Claims
I have Owner access on both the Azure Storage Account and the Azure Synapse Workspace. My storage account is a general-purpose v2 account. The Synapse workspace has system-assigned managed identity enabled. I assigned the following roles to the Synapse…
Create DB identity in synapse notebook
Hi, I want to create a Synapse notebook that can read data from an Azure SQL Server DB. So I need to connect to the DB via JDBC and enter the username and password, as in the code below. Is there a way I can store this username and password somewhere so that…
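One common answer is to keep the credentials in Azure Key Vault and fetch them at runtime with mssparkutils instead of hard-coding them. A minimal sketch, assuming a vault named my-key-vault and placeholder secret, server, and table names:

```python
from notebookutils import mssparkutils

# Fetch credentials from Azure Key Vault at runtime (vault and secret names
# are placeholders); the workspace managed identity needs Get access on secrets.
user = mssparkutils.credentials.getSecret("my-key-vault", "sql-username")
password = mssparkutils.credentials.getSecret("my-key-vault", "sql-password")

# `spark` is the session that Synapse notebooks provide out of the box.
df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
      .option("dbtable", "dbo.mytable")
      .option("user", user)
      .option("password", password)
      .load())
df.show()
```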
Pipelines are failing in the Synapse environment without any proper error specified?
Issue Description: Pipelines are failing in the Synapse environment without any specific error message or detailed logs. The pipelines run as usual but fail silently, and no error or warning is provided that can help diagnose the root cause of the…
How can I access the Synapse REST API from Spark notebooks in a Synapse workspace deployed with a private endpoint?
I'm trying to access the Synapse REST API from Spark notebooks in my Synapse workspace, which is deployed with a private endpoint and has public network access disabled. When I attempt to call the API from within a notebook, I get the following…
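For reference, one pattern that works when the workspace managed identity holds a Synapse RBAC role and the dev endpoint resolves through the private endpoint: acquire a token for the Synapse audience with mssparkutils and call the workspace dev endpoint directly. A sketch with a placeholder workspace name:

```python
import requests
from notebookutils import mssparkutils

# "Synapse" is the documented audience key for https://dev.azuresynapse.net.
token = mssparkutils.credentials.getToken("Synapse")

workspace = "myworkspace"  # placeholder
url = f"https://{workspace}.dev.azuresynapse.net/notebooks?api-version=2020-12-01"
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())
```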

Synapse Create Lake Database Table using a Parquet file from python
I have a Lake Database in Synapse - not a SQL Database. To date I have been creating tables manually in the workspace GUI, choosing "Create external table from data lake" and selecting a parquet file from my Azure Storage. I want to use Python to…
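One way to do the same thing from a notebook is to issue Spark SQL DDL that points a lake database table at the existing parquet folder. A minimal sketch; the database, table, and abfss path below are placeholders:

```python
# Create (or reuse) the lake database, then define a table over the
# existing parquet folder; names and the path below are placeholders.
spark.sql("CREATE DATABASE IF NOT EXISTS my_lake_db")

spark.sql("""
    CREATE TABLE IF NOT EXISTS my_lake_db.my_table
    USING PARQUET
    LOCATION 'abfss://container@mystorageaccount.dfs.core.windows.net/path/to/parquet/'
""")

spark.sql("SELECT * FROM my_lake_db.my_table LIMIT 10").show()
```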
Issues Connecting to Infor Data Lake Using Spark JDBC in Azure Synapse Spark Notebook
Hi Team, I'm encountering difficulties while attempting to connect to Infor Data Lake using Spark JDBC within Azure Synapse. Despite configuring the necessary JDBC driver and ionapi file path, I consistently encounter the following…
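For comparison, a generic Spark JDBC read in a Synapse notebook looks like the sketch below. The Infor driver class name, URL format, and ionapi handling are vendor-specific, so every option value here is a placeholder to check against the Infor driver documentation; the driver jar also has to be attached to the Spark pool as a workspace package.

```python
# Generic Spark JDBC read; all option values are placeholders, not the
# real Infor settings — consult the vendor docs for the driver class and URL.
df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:infordatalake://MY_TENANT")   # placeholder URL
      .option("driver", "com.infor.idl.jdbc.Driver")     # placeholder class
      .option("query", "SELECT * FROM SomeTable")
      .load())
df.printSchema()
```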
How to improve data flow sink performance
Hi, I am using Synapse for ETL. When I updated only 3 records, my sink processing time still took 7 minutes. My target table contains about 6 million records. I tried to optimize the sink by setting round robin and hash with 8 partitions, but I feel the…
How can I use the Incremental Column property in the Sink activity in Azure Synapse Analytics Data Flow?
I am trying to transfer data from my source table to my destination table within a Data Flow. Since my table contains a large volume of data, I attempted to use the Incremental Column feature shown in the screenshot below by selecting a suitable column…
Pipeline to wait until spark cluster is available
Hello, We have a very small node cluster, and because multiple Spark notebooks run at the same time, the notebooks or pipelines fail with a 6002 timed-out error. Some notebooks do not start at all and fail. Is there a way we can queue the notebooks or…
Does Azure Synapse Dedicated SQL Pools support group-based authentication?
Requirement: Grant read-only (or specific) access to all tables in an Azure Synapse Dedicated SQL Pool for a set of users by assigning them to a group in Azure Entra ID. Execution: For example, say I've created a group on Entra ID:…
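Dedicated SQL pools do support Entra ID group-based authentication: you create a contained database user from the group and grant it a role, and membership is resolved at login. A sketch driving the T-SQL from Python with pyodbc; the server, database, and group name are placeholders:

```python
import pyodbc

# Connect as an Entra ID admin of the dedicated pool (placeholders below).
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace.sql.azuresynapse.net;"
    "Database=mydedicatedpool;"
    "Authentication=ActiveDirectoryInteractive;",
    autocommit=True,
)
cur = conn.cursor()

# Map the Entra ID group to a database user, then grant read-only access.
cur.execute("CREATE USER [MyReadOnlyGroup] FROM EXTERNAL PROVIDER")
cur.execute("EXEC sp_addrolemember 'db_datareader', 'MyReadOnlyGroup'")
conn.close()
```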
What are the best practices or recommendations for setting up Disaster Recovery (DR) for services hosted on Azure?
I'm looking for recommendations on how to set up disaster recovery (DR) for the following Azure services, which are deployed in both test and production resource groups. I’ve been asked to plan a comprehensive DR strategy (particularly for Synapse dedicated…

Unable to generate MSI token from Synapse while writing data to ServiceBus and EventHub
As per the doc https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/microsoft-spark-utilities?pivots=programming-language-scala, only the services for which an Audience is available in the list are allowed to generate a token using MsSparkUtils credential…
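Since Service Bus and Event Hubs are indeed not in the mssparkutils audience list, one hedged workaround is to mint the token yourself with the azure-identity library, e.g. with a service principal whose secret lives in Key Vault (all identifiers below are placeholders):

```python
from azure.identity import ClientSecretCredential
from notebookutils import mssparkutils

# Service principal details are placeholders; the secret is pulled from
# Key Vault rather than hard-coded in the notebook.
cred = ClientSecretCredential(
    tenant_id="<tenant-guid>",
    client_id="<app-client-id>",
    client_secret=mssparkutils.credentials.getSecret("my-key-vault", "sp-secret"),
)

# Token scoped to Event Hubs; Service Bus uses
# https://servicebus.azure.net/.default instead.
token = cred.get_token("https://eventhubs.azure.net/.default")
print(token.expires_on)
```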
Error: External table 'habitant_gold' is not accessible because location does not exist or it is used by another process.
When I run my code : CREATE DATABASE datalake_gold_db; CREATE MASTER KEY ENCRYPTION BY PASSWORD ='Welcome@MS92'; CREATE DATABASE SCOPED CREDENTIAL MyCred WITH IDENTITY='SHARED ACCESS SIGNATURE', SECRET='SAS Token'; CREATE EXTERNAL DATA SOURCE…
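That error usually means the combined path (data source LOCATION plus the table's LOCATION folder) does not exist, or the SAS token lacks Read and List permission on it. A sketch of the full pattern, driven from Python with pyodbc against the serverless endpoint and reusing the MyCred credential from the question; every name, URL, and column here is a placeholder:

```python
import pyodbc

# Connect to the serverless SQL endpoint (placeholders throughout).
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace-ondemand.sql.azuresynapse.net;"
    "Database=datalake_gold_db;"
    "Authentication=ActiveDirectoryInteractive;",
    autocommit=True,
)
cur = conn.cursor()

# The data-source LOCATION is the container URL; the table LOCATION is a
# folder relative to it, and the combined path must actually exist.
cur.execute("""
CREATE EXTERNAL DATA SOURCE GoldSource
WITH (LOCATION = 'https://mystorageaccount.dfs.core.windows.net/gold',
      CREDENTIAL = MyCred)
""")
cur.execute("CREATE EXTERNAL FILE FORMAT ParquetFmt WITH (FORMAT_TYPE = PARQUET)")
cur.execute("""
CREATE EXTERNAL TABLE habitant_gold (id INT, name VARCHAR(200))
WITH (LOCATION = 'habitant/', DATA_SOURCE = GoldSource, FILE_FORMAT = ParquetFmt)
""")
conn.close()
```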
There are not enough vCore resources available in the workspace while running Synapse data flows
Hi Team, we are getting the below error while running a data flow inside a ForEach activity. Could you please help us fix this? "There are not enough resources available in the workspace, details: 'Your job requested 8 vcores. However, the workspace…
How to create an incremental pipeline in a data flow that deletes records not present in the source
Hi Team, could you please help us? We have to create an incremental pipeline in a Synapse data flow and handle insert, update, and delete (if a record is not present in the source, we have to delete it in the target Azure SQL). Can you please confirm whether it…
All accountid values in the contact table are NULL when using Azure Synapse Link or a Synapse pipeline to parquet
I've tried two ways to copy data from our Dynamics 365 to a Storage Account on Azure: Azure Synapse Analytics: creating a pipeline that connects to Dynamics 365 and copies the data to a parquet file on a storage account. I tried fetching the data in…
Synapse Analytics - ActionTimedOut
Hello Support Team, For the past three days, our processes have been encountering recurring "ActionTimedOut" errors. No recent code changes have been made, and there has been no significant increase in data processing volume. As you can see,…
New column in CosmosDB NoSQL does not show up in Analytical Store
When we add a new column to new documents in our CosmosDB NoSQL container, the new column does not show up in Analytical Store unless we add that new column to every other document in the container. This breaks not just queries for the new column, but…
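A quick way to check which columns the analytical store has actually inferred is to read the container through the cosmos.olap Spark format and print the schema; the linked service, container, and column names below are placeholders:

```python
# Read the Cosmos DB analytical store via Synapse Link and inspect the
# inferred schema; linked service and container names are placeholders.
df = (spark.read
      .format("cosmos.olap")
      .option("spark.synapse.linkedService", "MyCosmosLinkedService")
      .option("spark.cosmos.container", "contacts")
      .load())

df.printSchema()                   # the new column should appear once synced
df.select("newColumn").show(5)     # hypothetical new column name
```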
Data Sync Delays with Microsoft Synapse
The process of exporting data from Dynamics to the new Data Lake (DL) via Dataverse and converting it to parquet format is taking around an hour, which is significantly slower than the existing 15–20 minute performance via Export to Data Lake. This delay,…
Cold Start Query Performance
When running queries against the parquet format in the new DL, the initial "cold start" query takes significantly longer (e.g., 50 seconds) compared to subsequent queries (e.g., 10 seconds). This is double the time compared to the existing DL…