Handling Nested JSON Arrays from Business Central APIs using REST in Azure Data Factory – Best Approach?

Riddhi 0 Reputation points
2026-03-17T06:12:33.5933333+00:00

I’m building a dynamic ingestion pipeline in Azure Data Factory to load data from Microsoft Business Central APIs into Azure SQL staging tables, and I’ve encountered several challenges related to the JSON structure returned by the API.

A typical Business Central response looks like this:

{
  "@odata.context": "...",
  "value": [
    { "No": "00001A", "SystemId": "..." },
    { "No": "53929", "SystemId": "..." }
  ]
}

When pagination is involved, the structure often ends up stored in Blob like this:

[
  { "value": [ { "No": "00001A", "SystemId": "..." } ] },
  { "value": [ { "No": "53929", "SystemId": "..." } ] },
  { "value": [ { "No": "US007518", "SystemId": "..." } ] }
]

Issues Faced

1. OData Response Metadata

Business Central APIs return additional metadata such as @odata.context, which is not required for downstream ingestion and can interfere with schema inference in Azure Data Factory.

2. Nested value Array

The actual records are wrapped inside the value array, which prevents Copy Activity from dynamically ingesting the data as a flat dataset.

3. Copy Activity Limitation

Copy Activity expects a flat array of records. Mappings can be defined to reach into the nested structure, but doing so breaks the metadata-driven dynamic pipeline design, where multiple entities are ingested through a single pipeline and ForEach loop.

4. Mapping Data Flow Limitation

Mapping Data Flow can flatten nested arrays, but it requires Import Projection, which makes the schema static. Since the goal is to build a fully dynamic ingestion framework, this approach becomes difficult to maintain across multiple Business Central entities.

Question

Has anyone implemented a native or more efficient approach within Azure Data Factory to dynamically handle the value array and OData metadata returned by Business Central APIs, without introducing an Azure Function step?

I’d appreciate any suggestions or alternative patterns used in similar Business Central ingestion pipelines.

Azure Data Factory

An Azure service for ingesting, preparing, and transforming data at scale.


1 answer

  1. Q&A Assist
    2026-03-17T06:12:59.5566667+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    Within Azure Data Factory, the most native and maintainable pattern for handling Business Central OData responses (with @odata.* metadata and a nested value array) without Azure Functions is:

    1. Use REST connector + pagination to land raw JSON “as-is” in file storage
      • Configure a REST dataset for the Business Central API.
      • Use the REST connector’s pagination rules so ADF issues multiple requests and writes all pages. The connector supports patterns where the next URL comes from headers or body, and can stop when an end condition is met.
      • To export the JSON response without caring about schema, use a Copy activity with no mapping defined. This enables the schema‑agnostic mode and writes the raw JSON response directly to Blob/ADLS.
      • This step preserves the original OData metadata and value array but decouples ingestion from transformation.
    2. Store JSON using the arrayOfObjects file pattern (Type II) where possible
      • When writing JSON via Copy activity, set filePattern to arrayOfObjects if the response is already an array of records or can be normalized to that pattern.
      • ADF’s JSON handling in Copy activity supports setOfObjects and arrayOfObjects. arrayOfObjects aligns with the value: [ ... ] structure and is the most natural pattern for downstream processing.
    3. Use Mapping Data Flow with JSON source configured for arrays
      • In a Mapping Data Flow, use a JSON source pointing to the landed files.
      • Under JSON settings in the source options, choose the appropriate Document Form:
        • If each file is a JSON array (e.g., the value array persisted as the file root), use Array of documents.
        • If each file is a single JSON object containing value: [ ... ], use the default document form and then flatten the value array.
      • Mapping Data Flows support JSON as a source and can read from Blob, ADLS Gen1/Gen2, SFTP, and S3.
    4. Flatten the value array in Mapping Data Flow
      • Use the Flatten transformation to denormalize the value array into rows.
      • Configure the Flatten transformation to:
        • Set the Unroll by property to the value array.
        • Project the fields from the unrolled items as columns.
      • This removes the nested array structure and produces a flat rowset suitable for loading into Azure SQL staging tables.
    5. Handle OData metadata by ignoring it in schema
      • The OData metadata properties (@odata.context, @odata.nextLink, etc.) can simply be omitted from the projection in the data flow.
      • Since the Flatten transformation focuses on the value array, the metadata fields remain at the root and are not part of the flattened output unless explicitly selected.
    6. Dynamic / metadata‑driven pattern considerations
      • For a metadata‑driven framework where multiple entities are ingested:
        • Use a generic REST dataset and parameterize the relative URL, query parameters, and pagination rules per entity.
        • Land each entity’s raw JSON into a parameterized folder/path in Blob/ADLS.
        • Use a single generic Mapping Data Flow where:
          • The source dataset path is parameterized (entity name, date, etc.).
          • The Flatten transformation always unrolls the value array.
          • Downstream schema mapping to Azure SQL can be driven by metadata (e.g., dynamic sink mapping or separate per‑entity mappings if needed).
      • While Mapping Data Flow’s Import projection is static, the JSON source can still operate generically if the structure of the value array is similar across entities, or if the framework is designed to handle a known set of schemas.
    7. Avoiding BOM / encoding pitfalls when chaining Copy → Data Flow
      • If Copy activity is used to generate JSON files that are later consumed by a JSON source in data flows, be aware:
        • Copy activity writes Unicode JSON with BOM for UTF encodings.
        • Data Flow JSON source with Single document enabled does not support Unicode with BOM.
        • Errors such as "JSON parsing error, unsupported encoding or multiline" can occur when arrayOfObjects files containing a BOM are read with Single document enabled in Data Flow.
      • To avoid this:
        • Prefer arrayOfObjects and configure the JSON source’s Document Form accordingly instead of using Single document where BOM is present.
        • Or adjust encoding settings so the combination of Copy sink and Data Flow source is supported.
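    Steps 1–2 above can be sketched as a Copy activity fragment (activity and store names are hypothetical; property names follow the ADF REST and JSON connector schema). Note that no mapping is defined, which is what enables the schema-agnostic export, and the pagination rule reads the next request URL from the body's @odata.nextLink:

    ```json
    {
      "name": "CopyBCEntityToBlob",
      "type": "Copy",
      "typeProperties": {
        "source": {
          "type": "RestSource",
          "paginationRules": {
            "AbsoluteUrl": "$['@odata.nextLink']"
          }
        },
        "sink": {
          "type": "JsonSink",
          "storeSettings": { "type": "AzureBlobStorageWriteSettings" },
          "formatSettings": {
            "type": "JsonWriteSettings",
            "filePattern": "arrayOfObjects"
          }
        }
      }
    }
    ```

    With filePattern set to arrayOfObjects, the landed file is a single JSON array, which lines up with the Array of documents Document Form in the downstream data flow source.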

    This pattern stays entirely within ADF: REST connector + Copy activity (schema‑agnostic) → JSON files (arrayOfObjects or object with value array) → Mapping Data Flow JSON source + Flatten transformation → Azure SQL staging. It removes the need for Azure Functions while handling both the value array and OData metadata in a dynamic, reusable way.
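    The flatten portion of the pattern (steps 3–5) corresponds roughly to the following data flow script, assuming each landed file is a single JSON object whose value array holds the records; the column names No and SystemId are illustrative, and drifted columns would still flow through because schema drift is allowed:

    ```
    source(output(
            value as (No as string, SystemId as string)[]
        ),
        allowSchemaDrift: true,
        validateSchema: false,
        documentForm: 'singleDocument') ~> BCRaw
    BCRaw foldDown(unroll(value),
        mapColumn(
            No = value.No,
            SystemId = value.SystemId
        ),
        skipDuplicateMapInputs: false,
        skipDuplicateMapOutputs: false) ~> FlattenValue
    ```

    Because foldDown unrolls only the value array, root-level properties such as @odata.context simply never enter the projection, which is how the metadata is dropped without any explicit filtering step.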
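    For the metadata-driven framework in step 6, the generic REST dataset can be parameterized per entity. A minimal sketch, with dataset, linked service, and parameter names all hypothetical, so a ForEach loop can pass a different entity path on each iteration:

    ```json
    {
      "name": "DS_BC_Generic",
      "properties": {
        "type": "RestResource",
        "linkedServiceName": {
          "referenceName": "LS_BusinessCentral",
          "type": "LinkedServiceReference"
        },
        "parameters": {
          "entityPath": { "type": "string" }
        },
        "typeProperties": {
          "relativeUrl": {
            "value": "@dataset().entityPath",
            "type": "Expression"
          }
        }
      }
    }
    ```

    The ForEach loop then iterates over a metadata table (or pipeline parameter array) of entity paths and invokes the same Copy activity and data flow for each one.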
